Test Report: Docker_Linux_crio 21794

1ae3cc206fa1c5283cece957f99367f4350f676e:2025-10-25:42054

Failed tests (38/326)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.26
35 TestAddons/parallel/Registry 18.47
36 TestAddons/parallel/RegistryCreds 0.41
37 TestAddons/parallel/Ingress 148.9
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.3
41 TestAddons/parallel/CSI 33.23
42 TestAddons/parallel/Headlamp 2.72
43 TestAddons/parallel/CloudSpanner 5.25
44 TestAddons/parallel/LocalPath 10.12
45 TestAddons/parallel/NvidiaDevicePlugin 5.26
46 TestAddons/parallel/Yakd 5.25
47 TestAddons/parallel/AmdGpuDevicePlugin 5.25
97 TestFunctional/parallel/ServiceCmdConnect 602.9
114 TestFunctional/parallel/ServiceCmd/DeployApp 600.63
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.96
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.84
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
153 TestFunctional/parallel/ServiceCmd/Format 0.54
154 TestFunctional/parallel/ServiceCmd/URL 0.54
190 TestJSONOutput/pause/Command 2.23
196 TestJSONOutput/unpause/Command 1.76
247 TestPreload 437.96
265 TestPause/serial/Pause 5.58
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.36
351 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.36
356 TestStartStop/group/newest-cni/serial/Pause 5.94
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.21
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.43
374 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.16
377 TestStartStop/group/no-preload/serial/Pause 6.22
381 TestStartStop/group/old-k8s-version/serial/Pause 5.93
387 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.13
391 TestStartStop/group/embed-certs/serial/Pause 5.97
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable volcano --alsologtostderr -v=1: exit status 11 (257.77276ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:01:51.997750  143671 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:01:51.998040  143671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:01:51.998050  143671 out.go:374] Setting ErrFile to fd 2...
	I1025 09:01:51.998054  143671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:01:51.998295  143671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:01:51.998614  143671 mustload.go:65] Loading cluster: addons-273872
	I1025 09:01:51.998982  143671 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:01:51.998998  143671 addons.go:606] checking whether the cluster is paused
	I1025 09:01:51.999081  143671 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:01:51.999098  143671 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:01:51.999515  143671 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:01:52.018179  143671 ssh_runner.go:195] Run: systemctl --version
	I1025 09:01:52.018238  143671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:01:52.036648  143671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:01:52.135289  143671 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:01:52.135437  143671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:01:52.164135  143671 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:01:52.164163  143671 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:01:52.164168  143671 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:01:52.164173  143671 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:01:52.164177  143671 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:01:52.164187  143671 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:01:52.164190  143671 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:01:52.164194  143671 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:01:52.164197  143671 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:01:52.164210  143671 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:01:52.164214  143671 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:01:52.164217  143671 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:01:52.164221  143671 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:01:52.164225  143671 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:01:52.164229  143671 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:01:52.164239  143671 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:01:52.164247  143671 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:01:52.164254  143671 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:01:52.164257  143671 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:01:52.164261  143671 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:01:52.164264  143671 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:01:52.164268  143671 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:01:52.164277  143671 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:01:52.164282  143671 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:01:52.164289  143671 cri.go:89] found id: ""
	I1025 09:01:52.164343  143671 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:01:52.178901  143671 out.go:203] 
	W1025 09:01:52.180175  143671 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:01:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:01:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:01:52.180198  143671 out.go:285] * 
	* 
	W1025 09:01:52.183487  143671 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:01:52.184729  143671 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
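
Every MK_ADDON_DISABLE_PAUSED failure in this run (this one and the addons disable failures below) carries the same signature: before disabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" on the node, and on this crio runtime the call fails with "open /run/runc: no such file or directory" even though crictl enumerates the kube-system containers without error. A minimal way to reproduce the check by hand, sketched from the commands in the log above (the profile name addons-273872 is from this run; reaching the node through "minikube ssh" is an assumption about access):

	# the paused-state check as minikube runs it -- fails here with "open /run/runc: no such file or directory"
	minikube -p addons-273872 ssh -- sudo runc list -f json
	# the same kube-system containers are visible through the CRI
	minikube -p addons-273872 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system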

TestAddons/parallel/Registry (18.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.154558ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-9qs7h" [ab90902a-730d-4265-a4c9-7e84180f5480] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003451861s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-s6vt6" [e6258e29-be09-4ade-b9f6-99c705fbac83] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003835839s
addons_test.go:392: (dbg) Run:  kubectl --context addons-273872 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-273872 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-273872 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.017796493s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 ip
2025/10/25 09:02:20 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable registry --alsologtostderr -v=1: exit status 11 (240.894184ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:02:20.288868  146486 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:20.289141  146486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:20.289152  146486 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:20.289157  146486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:20.289375  146486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:20.289635  146486 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:20.289976  146486 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:20.289994  146486 addons.go:606] checking whether the cluster is paused
	I1025 09:02:20.290080  146486 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:20.290094  146486 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:20.290476  146486 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:20.308196  146486 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:20.308251  146486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:20.325844  146486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:20.424138  146486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:20.424221  146486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:20.453511  146486 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:20.453530  146486 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:20.453533  146486 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:20.453536  146486 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:20.453539  146486 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:20.453544  146486 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:20.453547  146486 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:20.453549  146486 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:20.453552  146486 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:20.453558  146486 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:20.453561  146486 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:20.453563  146486 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:20.453566  146486 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:20.453568  146486 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:20.453571  146486 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:20.453575  146486 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:20.453577  146486 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:20.453582  146486 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:20.453584  146486 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:20.453586  146486 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:20.453588  146486 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:20.453591  146486 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:20.453594  146486 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:20.453596  146486 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:20.453599  146486 cri.go:89] found id: ""
	I1025 09:02:20.453637  146486 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:20.467501  146486 out.go:203] 
	W1025 09:02:20.468759  146486 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:20.468781  146486 out.go:285] * 
	* 
	W1025 09:02:20.471738  146486 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:20.473084  146486 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (18.47s)

TestAddons/parallel/RegistryCreds (0.41s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.355944ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-273872
addons_test.go:332: (dbg) Run:  kubectl --context addons-273872 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (242.038148ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:02:28.528674  147149 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:28.528966  147149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:28.528977  147149 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:28.528981  147149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:28.529198  147149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:28.529530  147149 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:28.529903  147149 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:28.529920  147149 addons.go:606] checking whether the cluster is paused
	I1025 09:02:28.530018  147149 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:28.530037  147149 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:28.530426  147149 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:28.547544  147149 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:28.547615  147149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:28.566086  147149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:28.664117  147149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:28.664184  147149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:28.692730  147149 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:28.692750  147149 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:28.692754  147149 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:28.692757  147149 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:28.692759  147149 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:28.692763  147149 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:28.692765  147149 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:28.692767  147149 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:28.692769  147149 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:28.692775  147149 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:28.692777  147149 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:28.692780  147149 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:28.692782  147149 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:28.692784  147149 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:28.692787  147149 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:28.692794  147149 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:28.692800  147149 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:28.692805  147149 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:28.692807  147149 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:28.692810  147149 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:28.692814  147149 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:28.692817  147149 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:28.692819  147149 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:28.692822  147149 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:28.692824  147149 cri.go:89] found id: ""
	I1025 09:02:28.692862  147149 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:28.706550  147149 out.go:203] 
	W1025 09:02:28.707647  147149 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:28.707670  147149 out.go:285] * 
	* 
	W1025 09:02:28.710667  147149 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:28.711902  147149 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.41s)

TestAddons/parallel/Ingress (148.9s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-273872 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-273872 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-273872 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [e0abdafe-c76b-4464-b70e-72d4f797a77c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [e0abdafe-c76b-4464-b70e-72d4f797a77c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003222863s
I1025 09:02:11.474967  134145 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.215485173s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-273872 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
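
The ssh curl above ran for 2m16s and exited with status 28, which matches curl's operation-timeout code (CURLE_OPERATION_TIMEDOUT): the request hung for the full window rather than being refused outright. A hedged manual re-check (curl's -v and --max-time flags are standard; the Host header, profile name, and node IP 192.168.49.2 are taken from the test output, while reaching the ingress on port 80 at that IP is an assumption):

	# retry the in-node request with verbose output and a short timeout
	minikube -p addons-273872 ssh -- curl -v --max-time 15 -H 'Host: nginx.example.com' http://127.0.0.1/
	# or hit the node IP directly from the host
	curl -v --max-time 15 -H 'Host: nginx.example.com' http://192.168.49.2/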
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-273872
helpers_test.go:243: (dbg) docker inspect addons-273872:

-- stdout --
	[
	    {
	        "Id": "26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df",
	        "Created": "2025-10-25T08:59:46.86753105Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 136174,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T08:59:46.902553266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df/hostname",
	        "HostsPath": "/var/lib/docker/containers/26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df/hosts",
	        "LogPath": "/var/lib/docker/containers/26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df/26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df-json.log",
	        "Name": "/addons-273872",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-273872:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-273872",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df",
	                "LowerDir": "/var/lib/docker/overlay2/119d712737ccc9bc344f9d5ff06514fcf5f7ad2fb3991c70e0f1d9bfcb4c9a0d-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/119d712737ccc9bc344f9d5ff06514fcf5f7ad2fb3991c70e0f1d9bfcb4c9a0d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/119d712737ccc9bc344f9d5ff06514fcf5f7ad2fb3991c70e0f1d9bfcb4c9a0d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/119d712737ccc9bc344f9d5ff06514fcf5f7ad2fb3991c70e0f1d9bfcb4c9a0d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-273872",
	                "Source": "/var/lib/docker/volumes/addons-273872/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-273872",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-273872",
	                "name.minikube.sigs.k8s.io": "addons-273872",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8ae4f05606e055f220ae7ac42e548d8100e25c1b392de0467d91de0c72612a6b",
	            "SandboxKey": "/var/run/docker/netns/8ae4f05606e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-273872": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:31:df:ea:fa:ac",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4be60c27eb739040f4d436760938699c48376c5ddf25f116556dbcb7845d0f03",
	                    "EndpointID": "2e0b304e85a000b356abfbc10160fa4efdfcdf5bc06d9b15a67f6b0378c74ee2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-273872",
	                        "26302ced5c29"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-273872 -n addons-273872
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-273872 logs -n 25: (1.196954616s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-059821 --alsologtostderr --binary-mirror http://127.0.0.1:34279 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-059821 │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │                     │
	│ delete  │ -p binary-mirror-059821                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-059821 │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 08:59 UTC │
	│ addons  │ disable dashboard -p addons-273872                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │                     │
	│ addons  │ enable dashboard -p addons-273872                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │                     │
	│ start   │ -p addons-273872 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 09:01 UTC │
	│ addons  │ addons-273872 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-273872 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ addons  │ enable headlamp -p addons-273872 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ addons  │ addons-273872 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ addons  │ addons-273872 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ ssh     │ addons-273872 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ addons  │ addons-273872 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ addons  │ addons-273872 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ ip      │ addons-273872 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │ 25 Oct 25 09:02 UTC │
	│ addons  │ addons-273872 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ addons  │ addons-273872 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ addons  │ addons-273872 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-273872                                                                                                                                                                                                                                                                                                                                                                                           │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │ 25 Oct 25 09:02 UTC │
	│ addons  │ addons-273872 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ ssh     │ addons-273872 ssh cat /opt/local-path-provisioner/pvc-c6e0cb1d-628c-460d-83f5-992a360dc1c7_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │ 25 Oct 25 09:02 UTC │
	│ addons  │ addons-273872 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ addons  │ addons-273872 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ addons  │ addons-273872 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ addons  │ addons-273872 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ ip      │ addons-273872 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-273872        │ jenkins │ v1.37.0 │ 25 Oct 25 09:04 UTC │ 25 Oct 25 09:04 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:59:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
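	The [IWEF] severity prefix described above makes a dump like this easy to slice with standard tools. A minimal sketch (the file name is illustrative; lines may carry leading indentation in this report):
	
	# Show only warning/error entries from a glog-formatted log
	grep -E '^\s*[WE][0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2}' last_start.log
	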
	I1025 08:59:25.302653  135520 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:59:25.302788  135520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:59:25.302800  135520 out.go:374] Setting ErrFile to fd 2...
	I1025 08:59:25.302824  135520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:59:25.303013  135520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 08:59:25.303518  135520 out.go:368] Setting JSON to false
	I1025 08:59:25.304503  135520 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2509,"bootTime":1761380256,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:59:25.304587  135520 start.go:141] virtualization: kvm guest
	I1025 08:59:25.306318  135520 out.go:179] * [addons-273872] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:59:25.307787  135520 notify.go:220] Checking for updates...
	I1025 08:59:25.307854  135520 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 08:59:25.309129  135520 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:59:25.310368  135520 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 08:59:25.311831  135520 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 08:59:25.313142  135520 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 08:59:25.314279  135520 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:59:25.315685  135520 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:59:25.339051  135520 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 08:59:25.339126  135520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:59:25.399606  135520 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-25 08:59:25.389797236 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:59:25.399718  135520 docker.go:318] overlay module found
	I1025 08:59:25.401302  135520 out.go:179] * Using the docker driver based on user configuration
	I1025 08:59:25.402467  135520 start.go:305] selected driver: docker
	I1025 08:59:25.402486  135520 start.go:925] validating driver "docker" against <nil>
	I1025 08:59:25.402499  135520 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:59:25.403076  135520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:59:25.456120  135520 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-25 08:59:25.447335937 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:59:25.456288  135520 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:59:25.456593  135520 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:59:25.458214  135520 out.go:179] * Using Docker driver with root privileges
	I1025 08:59:25.459325  135520 cni.go:84] Creating CNI manager for ""
	I1025 08:59:25.459416  135520 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:59:25.459430  135520 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 08:59:25.459492  135520 start.go:349] cluster config:
	{Name:addons-273872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-273872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:59:25.460580  135520 out.go:179] * Starting "addons-273872" primary control-plane node in "addons-273872" cluster
	I1025 08:59:25.461533  135520 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 08:59:25.462687  135520 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 08:59:25.463672  135520 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:59:25.463706  135520 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 08:59:25.463715  135520 cache.go:58] Caching tarball of preloaded images
	I1025 08:59:25.463765  135520 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 08:59:25.463831  135520 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 08:59:25.463843  135520 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 08:59:25.464183  135520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/config.json ...
	I1025 08:59:25.464209  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/config.json: {Name:mk02f39a836faf29cc021b57d97f958117e83fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:25.480121  135520 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 08:59:25.480251  135520 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 08:59:25.480272  135520 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 08:59:25.480277  135520 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 08:59:25.480290  135520 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 08:59:25.480300  135520 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 08:59:38.598302  135520 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 08:59:38.598355  135520 cache.go:232] Successfully downloaded all kic artifacts
	I1025 08:59:38.598431  135520 start.go:360] acquireMachinesLock for addons-273872: {Name:mk21cf68fc8ee12ca2f54ce31eed973609b4be09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 08:59:38.598576  135520 start.go:364] duration metric: took 117.791µs to acquireMachinesLock for "addons-273872"
	I1025 08:59:38.598612  135520 start.go:93] Provisioning new machine with config: &{Name:addons-273872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-273872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:59:38.598690  135520 start.go:125] createHost starting for "" (driver="docker")
	I1025 08:59:38.600214  135520 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 08:59:38.600459  135520 start.go:159] libmachine.API.Create for "addons-273872" (driver="docker")
	I1025 08:59:38.600495  135520 client.go:168] LocalClient.Create starting
	I1025 08:59:38.600643  135520 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem
	I1025 08:59:38.729429  135520 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem
	I1025 08:59:38.943908  135520 cli_runner.go:164] Run: docker network inspect addons-273872 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 08:59:38.961138  135520 cli_runner.go:211] docker network inspect addons-273872 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 08:59:38.961221  135520 network_create.go:284] running [docker network inspect addons-273872] to gather additional debugging logs...
	I1025 08:59:38.961241  135520 cli_runner.go:164] Run: docker network inspect addons-273872
	W1025 08:59:38.977933  135520 cli_runner.go:211] docker network inspect addons-273872 returned with exit code 1
	I1025 08:59:38.977964  135520 network_create.go:287] error running [docker network inspect addons-273872]: docker network inspect addons-273872: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-273872 not found
	I1025 08:59:38.977976  135520 network_create.go:289] output of [docker network inspect addons-273872]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-273872 not found
	
	** /stderr **
	I1025 08:59:38.978091  135520 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:59:38.995149  135520 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f086d0}
	I1025 08:59:38.995188  135520 network_create.go:124] attempt to create docker network addons-273872 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 08:59:38.995245  135520 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-273872 addons-273872
	I1025 08:59:39.050339  135520 network_create.go:108] docker network addons-273872 192.168.49.0/24 created
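	The sequence above is minikube's inspect-then-create pattern: probe for an existing Docker network by profile name, and only create one (on a free private /24) when the probe fails. Reproduced as a shell sketch using the exact flags from the log:
	
	NET=addons-273872
	docker network inspect "$NET" >/dev/null 2>&1 || \
	  docker network create --driver=bridge \
	    --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true "$NET"
	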
	I1025 08:59:39.050384  135520 kic.go:121] calculated static IP "192.168.49.2" for the "addons-273872" container
	I1025 08:59:39.050453  135520 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 08:59:39.066286  135520 cli_runner.go:164] Run: docker volume create addons-273872 --label name.minikube.sigs.k8s.io=addons-273872 --label created_by.minikube.sigs.k8s.io=true
	I1025 08:59:39.083734  135520 oci.go:103] Successfully created a docker volume addons-273872
	I1025 08:59:39.083818  135520 cli_runner.go:164] Run: docker run --rm --name addons-273872-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-273872 --entrypoint /usr/bin/test -v addons-273872:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 08:59:42.513383  135520 cli_runner.go:217] Completed: docker run --rm --name addons-273872-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-273872 --entrypoint /usr/bin/test -v addons-273872:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (3.429486034s)
	I1025 08:59:42.513419  135520 oci.go:107] Successfully prepared a docker volume addons-273872
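	The "preload sidecar" run at 08:59:39-08:59:42 is a throwaway container whose only job is `test -d /var/lib`: mounting the named volume at /var forces Docker to create and populate it from the kicbase image before the real node container starts. A sketch, with KICBASE standing in for the pinned image reference above:
	
	KICBASE="gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8"
	docker volume create addons-273872
	# exits 0 only if the volume mounted and /var/lib exists inside it
	docker run --rm --entrypoint /usr/bin/test -v addons-273872:/var "$KICBASE" -d /var/lib
	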
	I1025 08:59:42.513454  135520 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:59:42.513479  135520 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 08:59:42.513540  135520 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-273872:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 08:59:46.796684  135520 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-273872:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.283102394s)
	I1025 08:59:46.796725  135520 kic.go:203] duration metric: took 4.28324175s to extract preloaded images to volume ...
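	The extraction step then streams the lz4 preload tarball straight into the same volume by overriding the image entrypoint with tar, so no images need to be pulled inside the node. An equivalent sketch, reusing KICBASE from the previous sketch (the cache path below uses MINIKUBE_HOME from the environment listed above):
	
	PRELOAD="$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD":/preloaded.tar:ro \
	  -v addons-273872:/extractDir \
	  "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir
	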
	W1025 08:59:46.796825  135520 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 08:59:46.796858  135520 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 08:59:46.796895  135520 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 08:59:46.852313  135520 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-273872 --name addons-273872 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-273872 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-273872 --network addons-273872 --ip 192.168.49.2 --volume addons-273872:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 08:59:47.113238  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Running}}
	I1025 08:59:47.132883  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 08:59:47.152554  135520 cli_runner.go:164] Run: docker exec addons-273872 stat /var/lib/dpkg/alternatives/iptables
	I1025 08:59:47.200606  135520 oci.go:144] the created container "addons-273872" has a running status.
	I1025 08:59:47.200644  135520 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa...
	I1025 08:59:47.454871  135520 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 08:59:47.481500  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 08:59:47.502750  135520 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 08:59:47.502769  135520 kic_runner.go:114] Args: [docker exec --privileged addons-273872 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 08:59:47.544130  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 08:59:47.562487  135520 machine.go:93] provisionDockerMachine start ...
	I1025 08:59:47.562589  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:47.579957  135520 main.go:141] libmachine: Using SSH client type: native
	I1025 08:59:47.580258  135520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1025 08:59:47.580277  135520 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 08:59:47.720071  135520 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-273872
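	Port 32888 here is the host side of the container's published 22/tcp mapping; every SSH step below rediscovers it with the same inspect template. A manual equivalent (the docker user and key path are taken from the provisioning lines above; the ssh invocation itself is an assumption, since minikube uses an in-process SSH client):
	
	PORT=$(docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-273872)
	ssh -i "$MINIKUBE_HOME/machines/addons-273872/id_rsa" -p "$PORT" docker@127.0.0.1 hostname
	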
	
	I1025 08:59:47.720097  135520 ubuntu.go:182] provisioning hostname "addons-273872"
	I1025 08:59:47.720148  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:47.738995  135520 main.go:141] libmachine: Using SSH client type: native
	I1025 08:59:47.739200  135520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1025 08:59:47.739215  135520 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-273872 && echo "addons-273872" | sudo tee /etc/hostname
	I1025 08:59:47.890492  135520 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-273872
	
	I1025 08:59:47.890628  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:47.908163  135520 main.go:141] libmachine: Using SSH client type: native
	I1025 08:59:47.908401  135520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1025 08:59:47.908420  135520 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-273872' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-273872/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-273872' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 08:59:48.046882  135520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 08:59:48.046916  135520 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 08:59:48.046940  135520 ubuntu.go:190] setting up certificates
	I1025 08:59:48.046953  135520 provision.go:84] configureAuth start
	I1025 08:59:48.047008  135520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-273872
	I1025 08:59:48.064696  135520 provision.go:143] copyHostCerts
	I1025 08:59:48.064776  135520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 08:59:48.064890  135520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 08:59:48.064963  135520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 08:59:48.065017  135520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.addons-273872 san=[127.0.0.1 192.168.49.2 addons-273872 localhost minikube]
	I1025 08:59:48.465921  135520 provision.go:177] copyRemoteCerts
	I1025 08:59:48.465980  135520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 08:59:48.466014  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:48.483118  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 08:59:48.581538  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 08:59:48.600132  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 08:59:48.616687  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 08:59:48.633369  135520 provision.go:87] duration metric: took 586.385193ms to configureAuth
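	configureAuth generated a CA-signed server certificate whose SANs (127.0.0.1, 192.168.49.2, addons-273872, localhost, minikube) cover every name the node is reached by, then copied it into /etc/docker. A hedged openssl equivalent of that signing step (minikube does this in Go; the file names here are illustrative):
	
	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.addons-273872" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-273872,DNS:localhost,DNS:minikube')
	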
	I1025 08:59:48.633400  135520 ubuntu.go:206] setting minikube options for container-runtime
	I1025 08:59:48.633577  135520 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:59:48.633677  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:48.650609  135520 main.go:141] libmachine: Using SSH client type: native
	I1025 08:59:48.650855  135520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1025 08:59:48.650879  135520 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 08:59:48.896016  135520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 08:59:48.896042  135520 machine.go:96] duration metric: took 1.333533169s to provisionDockerMachine
	I1025 08:59:48.896053  135520 client.go:171] duration metric: took 10.295547813s to LocalClient.Create
	I1025 08:59:48.896069  135520 start.go:167] duration metric: took 10.295613144s to libmachine.API.Create "addons-273872"
	I1025 08:59:48.896077  135520 start.go:293] postStartSetup for "addons-273872" (driver="docker")
	I1025 08:59:48.896086  135520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 08:59:48.896134  135520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 08:59:48.896166  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:48.914499  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 08:59:49.015610  135520 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 08:59:49.019360  135520 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 08:59:49.019384  135520 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 08:59:49.019395  135520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 08:59:49.019448  135520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 08:59:49.019471  135520 start.go:296] duration metric: took 123.388312ms for postStartSetup
	I1025 08:59:49.019754  135520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-273872
	I1025 08:59:49.037320  135520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/config.json ...
	I1025 08:59:49.037647  135520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:59:49.037698  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:49.054098  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 08:59:49.150172  135520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 08:59:49.154363  135520 start.go:128] duration metric: took 10.555633922s to createHost
	I1025 08:59:49.154391  135520 start.go:83] releasing machines lock for "addons-273872", held for 10.555796085s
	I1025 08:59:49.154451  135520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-273872
	I1025 08:59:49.171634  135520 ssh_runner.go:195] Run: cat /version.json
	I1025 08:59:49.171674  135520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 08:59:49.171677  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:49.171733  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:49.187878  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 08:59:49.188804  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 08:59:49.283719  135520 ssh_runner.go:195] Run: systemctl --version
	I1025 08:59:49.337939  135520 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 08:59:49.371322  135520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 08:59:49.376105  135520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 08:59:49.376166  135520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 08:59:49.402180  135520 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 08:59:49.402206  135520 start.go:495] detecting cgroup driver to use...
	I1025 08:59:49.402242  135520 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 08:59:49.402297  135520 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 08:59:49.418034  135520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 08:59:49.430015  135520 docker.go:218] disabling cri-docker service (if available) ...
	I1025 08:59:49.430066  135520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 08:59:49.445750  135520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 08:59:49.462443  135520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 08:59:49.541701  135520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 08:59:49.626683  135520 docker.go:234] disabling docker service ...
	I1025 08:59:49.626744  135520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 08:59:49.644249  135520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 08:59:49.656610  135520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 08:59:49.733509  135520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 08:59:49.813722  135520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
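	The stop/disable/mask sequence above ensures neither dockerd nor cri-dockerd can come back on reboot and contend for the CRI socket, since CRI-O is the configured runtime. Condensed from the commands in the log:
	
	# inside the node: make docker and cri-docker unstartable
	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	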
	I1025 08:59:49.826019  135520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 08:59:49.839597  135520 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 08:59:49.839664  135520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.849447  135520 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 08:59:49.849502  135520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.858219  135520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.866548  135520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.874871  135520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 08:59:49.882850  135520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.891222  135520 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.904037  135520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.912859  135520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 08:59:49.920257  135520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 08:59:49.927498  135520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:59:50.003808  135520 ssh_runner.go:195] Run: sudo systemctl restart crio
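	After the sed edits and the restart above, the drop-in should pin the pause image, systemd cgroups, conmon's cgroup, and the unprivileged-port sysctl. A quick verification sketch (expected values inferred from the edits in the log):
	
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#     "net.ipv4.ip_unprivileged_port_start=0",
	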
	I1025 08:59:50.101887  135520 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 08:59:50.101969  135520 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 08:59:50.105802  135520 start.go:563] Will wait 60s for crictl version
	I1025 08:59:50.105860  135520 ssh_runner.go:195] Run: which crictl
	I1025 08:59:50.109280  135520 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 08:59:50.133695  135520 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 08:59:50.133817  135520 ssh_runner.go:195] Run: crio --version
	I1025 08:59:50.159718  135520 ssh_runner.go:195] Run: crio --version
	I1025 08:59:50.187805  135520 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 08:59:50.188807  135520 cli_runner.go:164] Run: docker network inspect addons-273872 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:59:50.206638  135520 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 08:59:50.210584  135520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
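	The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` dance, rather than `sed -i` or a rename, is deliberate: inside a container /etc/hosts is bind-mounted, so the file must be overwritten in place without replacing its inode. Minimal form:
	
	# replace contents of a bind-mounted file without replacing the inode
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$ \
	  && sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
	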
	I1025 08:59:50.220445  135520 kubeadm.go:883] updating cluster {Name:addons-273872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-273872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 08:59:50.220557  135520 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:59:50.220621  135520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:59:50.249434  135520 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:59:50.249455  135520 crio.go:433] Images already preloaded, skipping extraction
	I1025 08:59:50.249496  135520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:59:50.273069  135520 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:59:50.273092  135520 cache_images.go:85] Images are preloaded, skipping loading
	I1025 08:59:50.273099  135520 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 08:59:50.273186  135520 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-273872 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-273872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
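	The empty `ExecStart=` followed by a second `ExecStart=` in the unit above is the standard systemd drop-in idiom for replacing (not appending to) the packaged start command; the file is written below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To see the merged result on the node:
	
	systemctl cat kubelet         # shows the unit plus drop-ins, in order
	sudo systemctl daemon-reload  # required after editing, as done at 08:59:50.382
	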
	I1025 08:59:50.273245  135520 ssh_runner.go:195] Run: crio config
	I1025 08:59:50.315240  135520 cni.go:84] Creating CNI manager for ""
	I1025 08:59:50.315264  135520 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:59:50.315283  135520 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 08:59:50.315305  135520 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-273872 NodeName:addons-273872 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 08:59:50.315447  135520 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-273872"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
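	This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and ultimately drives cluster bring-up. In sketch form, the invocation it feeds is (an assumption about the exact call; minikube wraps kubeadm with additional preflight handling):
	
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml
	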
	
	I1025 08:59:50.315505  135520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 08:59:50.323728  135520 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 08:59:50.323789  135520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 08:59:50.331091  135520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 08:59:50.343196  135520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 08:59:50.357173  135520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1025 08:59:50.369011  135520 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 08:59:50.372604  135520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:59:50.382469  135520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:59:50.461751  135520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:59:50.485767  135520 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872 for IP: 192.168.49.2
	I1025 08:59:50.485787  135520 certs.go:195] generating shared ca certs ...
	I1025 08:59:50.485807  135520 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.485937  135520 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 08:59:50.609190  135520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt ...
	I1025 08:59:50.609222  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt: {Name:mk2a0bf68b60a6c965e83a3989bb90c992cb6912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.609407  135520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key ...
	I1025 08:59:50.609420  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key: {Name:mk7108a76ea2395e018371973e19ff685f801980 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.609513  135520 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 08:59:50.640724  135520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt ...
	I1025 08:59:50.640752  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt: {Name:mkde7e06a909eaa2d04a061512dd265eed9be2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.640900  135520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key ...
	I1025 08:59:50.640911  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key: {Name:mk25b2121521c2fb0bd2ad6475a236cb9b85a15e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.640971  135520 certs.go:257] generating profile certs ...
	I1025 08:59:50.641023  135520 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.key
	I1025 08:59:50.641038  135520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt with IP's: []
	I1025 08:59:50.958213  135520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt ...
	I1025 08:59:50.958244  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: {Name:mk55607e53c3612a6c4997d35a7ebbdb7769e0b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.958435  135520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.key ...
	I1025 08:59:50.958449  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.key: {Name:mkb5eaf77dde3c7dbe83c49037b5eea1c43d9e0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.958518  135520 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.key.0f3ef996
	I1025 08:59:50.958544  135520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt.0f3ef996 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 08:59:51.075249  135520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt.0f3ef996 ...
	I1025 08:59:51.075281  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt.0f3ef996: {Name:mk73b952c38b492e0b6068e78abffccb1b670107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:51.075464  135520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.key.0f3ef996 ...
	I1025 08:59:51.075478  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.key.0f3ef996: {Name:mkb228675fe9a428abb98e87fb3540e4af1636d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:51.075552  135520 certs.go:382] copying /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt.0f3ef996 -> /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt
	I1025 08:59:51.075626  135520 certs.go:386] copying /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.key.0f3ef996 -> /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.key
	I1025 08:59:51.075672  135520 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.key
	I1025 08:59:51.075690  135520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.crt with IP's: []
	I1025 08:59:51.372241  135520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.crt ...
	I1025 08:59:51.372272  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.crt: {Name:mkff79941901f1aad29cee168c50a75a1746f900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:51.372453  135520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.key ...
	I1025 08:59:51.372467  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.key: {Name:mk8e9331221bd4594930359cb7deb1ed51b45a61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:51.372706  135520 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 08:59:51.372744  135520 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 08:59:51.372770  135520 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 08:59:51.372791  135520 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 08:59:51.373379  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 08:59:51.392061  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 08:59:51.410505  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 08:59:51.429410  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 08:59:51.447021  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 08:59:51.463912  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 08:59:51.480432  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 08:59:51.497318  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 08:59:51.514007  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 08:59:51.532115  135520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 08:59:51.544185  135520 ssh_runner.go:195] Run: openssl version
	I1025 08:59:51.550243  135520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 08:59:51.560546  135520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:59:51.564135  135520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:59:51.564182  135520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:59:51.597887  135520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
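	The b5213941.0 link name above is OpenSSL's subject-hash convention: CA lookups in /etc/ssl/certs expect each certificate to be reachable as <hash>.0, where the hash is exactly what the `openssl x509 -hash` call above printed. Derived by hand (same path as in the log):
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"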
	I1025 08:59:51.606635  135520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 08:59:51.610303  135520 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 08:59:51.610379  135520 kubeadm.go:400] StartCluster: {Name:addons-273872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-273872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:59:51.610448  135520 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:59:51.610488  135520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:59:51.637025  135520 cri.go:89] found id: ""
	I1025 08:59:51.637117  135520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 08:59:51.645244  135520 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 08:59:51.653152  135520 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 08:59:51.653209  135520 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 08:59:51.660893  135520 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 08:59:51.660909  135520 kubeadm.go:157] found existing configuration files:
	
	I1025 08:59:51.660953  135520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 08:59:51.668309  135520 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 08:59:51.668389  135520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 08:59:51.675797  135520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 08:59:51.683097  135520 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 08:59:51.683158  135520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 08:59:51.690436  135520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 08:59:51.697971  135520 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 08:59:51.698017  135520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 08:59:51.705403  135520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 08:59:51.712771  135520 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 08:59:51.712842  135520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 08:59:51.720032  135520 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 08:59:51.756375  135520 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 08:59:51.756444  135520 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 08:59:51.775660  135520 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 08:59:51.775749  135520 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 08:59:51.775809  135520 kubeadm.go:318] OS: Linux
	I1025 08:59:51.775886  135520 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 08:59:51.775968  135520 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 08:59:51.776049  135520 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 08:59:51.776124  135520 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 08:59:51.776205  135520 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 08:59:51.776273  135520 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 08:59:51.776324  135520 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 08:59:51.776411  135520 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 08:59:51.829988  135520 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 08:59:51.830125  135520 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 08:59:51.830261  135520 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 08:59:51.837375  135520 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 08:59:51.840056  135520 out.go:252]   - Generating certificates and keys ...
	I1025 08:59:51.840147  135520 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 08:59:51.840233  135520 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 08:59:52.084318  135520 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 08:59:52.116459  135520 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 08:59:52.410394  135520 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 08:59:52.684485  135520 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 08:59:52.822848  135520 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 08:59:52.823001  135520 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-273872 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 08:59:52.962777  135520 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 08:59:52.962982  135520 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-273872 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 08:59:53.026827  135520 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 08:59:53.244992  135520 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 08:59:53.335321  135520 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 08:59:53.335829  135520 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 08:59:53.899141  135520 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 08:59:54.591086  135520 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 08:59:54.892096  135520 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 08:59:55.113050  135520 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 08:59:55.260915  135520 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 08:59:55.261456  135520 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 08:59:55.265317  135520 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 08:59:55.267645  135520 out.go:252]   - Booting up control plane ...
	I1025 08:59:55.267762  135520 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 08:59:55.267872  135520 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 08:59:55.267973  135520 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 08:59:55.281116  135520 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 08:59:55.281254  135520 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 08:59:55.287559  135520 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 08:59:55.287781  135520 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 08:59:55.287845  135520 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 08:59:55.383491  135520 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 08:59:55.383630  135520 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 08:59:55.885230  135520 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.887408ms
	I1025 08:59:55.890337  135520 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 08:59:55.890489  135520 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 08:59:55.890585  135520 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 08:59:55.890653  135520 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 08:59:57.294507  135520 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.404050454s
	I1025 08:59:57.986132  135520 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.095773935s
	I1025 08:59:59.891487  135520 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001103158s
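	The three control-plane probes above hit local endpoints that can be replicated by hand from inside the node (minikube ssh); -k is needed because the serving certificates are cluster-signed. A sketch:
	  curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
	  curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
	  curl -sk https://192.168.49.2:8443/livez   # kube-apiserver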
	I1025 08:59:59.902273  135520 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 08:59:59.911531  135520 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 08:59:59.919083  135520 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 08:59:59.919388  135520 kubeadm.go:318] [mark-control-plane] Marking the node addons-273872 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 08:59:59.926571  135520 kubeadm.go:318] [bootstrap-token] Using token: daokbe.xkqhffctwdfi006u
	I1025 08:59:59.928127  135520 out.go:252]   - Configuring RBAC rules ...
	I1025 08:59:59.928291  135520 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 08:59:59.930740  135520 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 08:59:59.935254  135520 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 08:59:59.937393  135520 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 08:59:59.940446  135520 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 08:59:59.942575  135520 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:00:00.297338  135520 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:00:00.712376  135520 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:00:01.297327  135520 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:00:01.298205  135520 kubeadm.go:318] 
	I1025 09:00:01.298303  135520 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:00:01.298329  135520 kubeadm.go:318] 
	I1025 09:00:01.298444  135520 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:00:01.298476  135520 kubeadm.go:318] 
	I1025 09:00:01.298510  135520 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:00:01.298572  135520 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:00:01.298622  135520 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:00:01.298640  135520 kubeadm.go:318] 
	I1025 09:00:01.298722  135520 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:00:01.298733  135520 kubeadm.go:318] 
	I1025 09:00:01.298815  135520 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:00:01.298833  135520 kubeadm.go:318] 
	I1025 09:00:01.298898  135520 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:00:01.298985  135520 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:00:01.299059  135520 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:00:01.299066  135520 kubeadm.go:318] 
	I1025 09:00:01.299137  135520 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:00:01.299203  135520 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:00:01.299209  135520 kubeadm.go:318] 
	I1025 09:00:01.299279  135520 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token daokbe.xkqhffctwdfi006u \
	I1025 09:00:01.299410  135520 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6e42eae48b755d443fba2bbd8cd2499bc8de14d7e81dc26af35578c948bc74ab \
	I1025 09:00:01.299431  135520 kubeadm.go:318] 	--control-plane 
	I1025 09:00:01.299437  135520 kubeadm.go:318] 
	I1025 09:00:01.299512  135520 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:00:01.299518  135520 kubeadm.go:318] 
	I1025 09:00:01.299590  135520 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token daokbe.xkqhffctwdfi006u \
	I1025 09:00:01.299688  135520 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6e42eae48b755d443fba2bbd8cd2499bc8de14d7e81dc26af35578c948bc74ab 
	I1025 09:00:01.302088  135520 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:00:01.302224  135520 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
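	The --discovery-token-ca-cert-hash in the join commands above can be recomputed from the CA certificate using the standard kubeadm recipe. A sketch against minikube's certificateDir from this log (assumes an RSA CA key):
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'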
	I1025 09:00:01.302255  135520 cni.go:84] Creating CNI manager for ""
	I1025 09:00:01.302272  135520 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:00:01.303895  135520 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:00:01.304869  135520 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:00:01.308989  135520 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:00:01.309004  135520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:00:01.321720  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
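	After the CNI manifest apply above, the kindnet pods should roll out in kube-system. A spot check, as a sketch (the DaemonSet name "kindnet" is assumed from the kindnet manifest, which this log does not show):
	  kubectl -n kube-system rollout status daemonset kindnet --timeout=120s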
	I1025 09:00:01.521299  135520 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:00:01.521489  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-273872 minikube.k8s.io/updated_at=2025_10_25T09_00_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=addons-273872 minikube.k8s.io/primary=true
	I1025 09:00:01.521496  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:01.530534  135520 ops.go:34] apiserver oom_adj: -16
	I1025 09:00:01.605453  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:02.106336  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:02.605835  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:03.106554  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:03.605509  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:04.106585  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:04.606296  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:05.106475  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:05.606449  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:06.106298  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:06.168689  135520 kubeadm.go:1113] duration metric: took 4.647294753s to wait for elevateKubeSystemPrivileges
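	elevateKubeSystemPrivileges is the ~500ms polling loop visible above: it retries `kubectl get sa default` until the default ServiceAccount exists, a proxy for the controller-manager having finished bootstrapping service accounts, after creating the minikube-rbac ClusterRoleBinding. The equivalent shell wait, as a sketch:
	  until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done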
	I1025 09:00:06.168728  135520 kubeadm.go:402] duration metric: took 14.558354848s to StartCluster
	I1025 09:00:06.168753  135520 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:00:06.168907  135520 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:00:06.169582  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:00:06.169806  135520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:00:06.169853  135520 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:00:06.169893  135520 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 09:00:06.170039  135520 addons.go:69] Setting yakd=true in profile "addons-273872"
	I1025 09:00:06.170065  135520 addons.go:69] Setting default-storageclass=true in profile "addons-273872"
	I1025 09:00:06.170088  135520 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-273872"
	I1025 09:00:06.170096  135520 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:00:06.170105  135520 addons.go:69] Setting cloud-spanner=true in profile "addons-273872"
	I1025 09:00:06.170111  135520 addons.go:69] Setting ingress-dns=true in profile "addons-273872"
	I1025 09:00:06.170122  135520 addons.go:238] Setting addon cloud-spanner=true in "addons-273872"
	I1025 09:00:06.170088  135520 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-273872"
	I1025 09:00:06.170112  135520 addons.go:69] Setting gcp-auth=true in profile "addons-273872"
	I1025 09:00:06.170070  135520 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-273872"
	I1025 09:00:06.170141  135520 addons.go:238] Setting addon ingress-dns=true in "addons-273872"
	I1025 09:00:06.170149  135520 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-273872"
	I1025 09:00:06.170167  135520 mustload.go:65] Loading cluster: addons-273872
	I1025 09:00:06.170178  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170184  135520 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-273872"
	I1025 09:00:06.170194  135520 addons.go:69] Setting metrics-server=true in profile "addons-273872"
	I1025 09:00:06.170209  135520 addons.go:238] Setting addon metrics-server=true in "addons-273872"
	I1025 09:00:06.170211  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170211  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170225  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170101  135520 addons.go:238] Setting addon yakd=true in "addons-273872"
	I1025 09:00:06.170411  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170445  135520 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:00:06.170576  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.170687  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.170709  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.170712  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.170733  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.170769  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.170856  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.171041  135520 addons.go:69] Setting storage-provisioner=true in profile "addons-273872"
	I1025 09:00:06.171066  135520 addons.go:238] Setting addon storage-provisioner=true in "addons-273872"
	I1025 09:00:06.171115  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170186  135520 addons.go:69] Setting inspektor-gadget=true in profile "addons-273872"
	I1025 09:00:06.171384  135520 addons.go:238] Setting addon inspektor-gadget=true in "addons-273872"
	I1025 09:00:06.171409  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.171640  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.171887  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.172252  135520 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-273872"
	I1025 09:00:06.172274  135520 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-273872"
	I1025 09:00:06.170178  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.172436  135520 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-273872"
	I1025 09:00:06.172463  135520 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-273872"
	I1025 09:00:06.172492  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170041  135520 addons.go:69] Setting ingress=true in profile "addons-273872"
	I1025 09:00:06.173184  135520 addons.go:238] Setting addon ingress=true in "addons-273872"
	I1025 09:00:06.173220  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.173753  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.174286  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.172890  135520 addons.go:69] Setting registry=true in profile "addons-273872"
	I1025 09:00:06.174806  135520 addons.go:238] Setting addon registry=true in "addons-273872"
	I1025 09:00:06.174836  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.172914  135520 addons.go:69] Setting registry-creds=true in profile "addons-273872"
	I1025 09:00:06.177056  135520 addons.go:238] Setting addon registry-creds=true in "addons-273872"
	I1025 09:00:06.177088  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.172766  135520 out.go:179] * Verifying Kubernetes components...
	I1025 09:00:06.172925  135520 addons.go:69] Setting volumesnapshots=true in profile "addons-273872"
	I1025 09:00:06.177928  135520 addons.go:238] Setting addon volumesnapshots=true in "addons-273872"
	I1025 09:00:06.178003  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.172933  135520 addons.go:69] Setting volcano=true in profile "addons-273872"
	I1025 09:00:06.178417  135520 addons.go:238] Setting addon volcano=true in "addons-273872"
	I1025 09:00:06.178453  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.182889  135520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:00:06.183565  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.183801  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.184561  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.185446  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.189376  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.195433  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.229980  135520 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 09:00:06.232027  135520 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 09:00:06.232050  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 09:00:06.232111  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.238452  135520 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:00:06.239620  135520 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:00:06.239641  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:00:06.239699  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.240045  135520 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 09:00:06.244080  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 09:00:06.245631  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 09:00:06.245928  135520 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 09:00:06.246418  135520 addons.go:238] Setting addon default-storageclass=true in "addons-273872"
	I1025 09:00:06.246463  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.246954  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.247566  135520 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:00:06.247580  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 09:00:06.247635  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.249609  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.250188  135520 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 09:00:06.250208  135520 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 09:00:06.250155  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 09:00:06.250268  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.255108  135520 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 09:00:06.258001  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 09:00:06.260018  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 09:00:06.262497  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 09:00:06.264145  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 09:00:06.264501  135520 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 09:00:06.264525  135520 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 09:00:06.264604  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.266374  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 09:00:06.268973  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 09:00:06.268991  135520 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 09:00:06.269066  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.286405  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 09:00:06.290120  135520 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 09:00:06.290158  135520 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 09:00:06.290240  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.290427  135520 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-273872"
	I1025 09:00:06.290469  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.291029  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.303945  135520 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 09:00:06.305002  135520 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 09:00:06.305025  135520 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 09:00:06.305100  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.311912  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.314239  135520 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	W1025 09:00:06.314951  135520 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 09:00:06.315163  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.315577  135520 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:00:06.315591  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 09:00:06.315643  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.327382  135520 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 09:00:06.329260  135520 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:00:06.329280  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 09:00:06.329384  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.331166  135520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
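	The sed pipeline above splices a hosts{} block (mapping host.minikube.internal to the gateway IP 192.168.49.1) and a log directive into the Corefile, then replaces the ConfigMap in place. To see the resulting Corefile:
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'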
	I1025 09:00:06.335288  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.338239  135520 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1025 09:00:06.339039  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.339463  135520 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:00:06.339506  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 09:00:06.339621  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.351024  135520 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 09:00:06.354162  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.355373  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.355971  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.357549  135520 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:00:06.358690  135520 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:00:06.360160  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.360738  135520 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:00:06.360756  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 09:00:06.360817  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.361972  135520 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 09:00:06.363074  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.364148  135520 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 09:00:06.366422  135520 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 09:00:06.366486  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 09:00:06.366567  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.369140  135520 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:00:06.369158  135520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:00:06.369215  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.391042  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.397312  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.401867  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.405153  135520 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 09:00:06.407474  135520 out.go:179]   - Using image docker.io/busybox:stable
	I1025 09:00:06.407490  135520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:00:06.408837  135520 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:00:06.408855  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 09:00:06.408911  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.417408  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.427766  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.447906  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.493000  135520 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:06.493029  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 09:00:06.514590  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 09:00:06.515596  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:06.518090  135520 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 09:00:06.518114  135520 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 09:00:06.524290  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:00:06.524761  135520 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 09:00:06.524780  135520 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 09:00:06.541143  135520 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 09:00:06.541167  135520 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 09:00:06.542563  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:00:06.550866  135520 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 09:00:06.550894  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 09:00:06.551778  135520 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 09:00:06.551809  135520 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 09:00:06.574320  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 09:00:06.574425  135520 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 09:00:06.578006  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:00:06.586617  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:00:06.592375  135520 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 09:00:06.592402  135520 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 09:00:06.593129  135520 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 09:00:06.593147  135520 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 09:00:06.593264  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:00:06.595042  135520 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 09:00:06.595060  135520 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 09:00:06.601682  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:00:06.606219  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:00:06.611206  135520 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 09:00:06.611290  135520 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 09:00:06.636572  135520 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:00:06.636690  135520 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 09:00:06.638429  135520 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:00:06.638451  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 09:00:06.640276  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:00:06.647475  135520 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:00:06.647496  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 09:00:06.649766  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 09:00:06.649850  135520 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 09:00:06.668553  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 09:00:06.668585  135520 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 09:00:06.674388  135520 node_ready.go:35] waiting up to 6m0s for node "addons-273872" to be "Ready" ...
	I1025 09:00:06.674667  135520 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
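The "host record injected" line refers to minikube appending a hosts entry for host.minikube.internal to the CoreDNS Corefile, so pods can reach the host machine (192.168.49.1 on this docker network) by name. A sketch of how to confirm the injection against this cluster, assuming the standard coredns ConfigMap in kube-system:

	# inspect the CoreDNS Corefile for the injected host record
	kubectl -n kube-system get configmap coredns -o yaml | grep -n -A 2 'host.minikube.internal'
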
	I1025 09:00:06.690957  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:00:06.712142  135520 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:00:06.712166  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 09:00:06.715013  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 09:00:06.715044  135520 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 09:00:06.716075  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:00:06.722091  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:00:06.779588  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 09:00:06.779630  135520 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 09:00:06.804966  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:00:06.848537  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 09:00:06.848590  135520 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 09:00:06.919869  135520 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 09:00:06.919895  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 09:00:06.993974  135520 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 09:00:06.994081  135520 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 09:00:07.036939  135520 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 09:00:07.036961  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 09:00:07.104211  135520 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 09:00:07.104233  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 09:00:07.174877  135520 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:00:07.174903  135520 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 09:00:07.184416  135520 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-273872" context rescaled to 1 replicas
	I1025 09:00:07.249159  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1025 09:00:07.456822  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:07.456870  135520 retry.go:31] will retry after 258.693862ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
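Every failure in the InspektorGadget retry loop above and below has the same root cause: at least one YAML document inside /etc/kubernetes/addons/ig-crd.yaml is missing the top-level apiVersion and kind fields, so kubectl's client-side validation rejects the file. Because the file on disk never changes between attempts, each retry (including the --force variants that follow) fails identically. The complaint can be reproduced without touching the cluster; a sketch using the paths from the log:

	# client-side dry run surfaces the same "apiVersion not set, kind not set" error
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
	  -f /etc/kubernetes/addons/ig-crd.yaml

Suppressing the check with --validate=false, as the error text suggests, would not help here: an object without apiVersion and kind cannot be decoded server-side either.
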
	I1025 09:00:07.716074  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:07.785206  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.207153235s)
	I1025 09:00:07.785246  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.198595794s)
	I1025 09:00:07.785275  135520 addons.go:479] Verifying addon ingress=true in "addons-273872"
	I1025 09:00:07.785323  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.192039858s)
	I1025 09:00:07.785426  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.179189577s)
	I1025 09:00:07.785400  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.183653599s)
	I1025 09:00:07.785497  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.145196435s)
	I1025 09:00:07.785611  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.094622689s)
	I1025 09:00:07.785637  135520 addons.go:479] Verifying addon metrics-server=true in "addons-273872"
	I1025 09:00:07.785654  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.069548643s)
	I1025 09:00:07.785683  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.063569403s)
	I1025 09:00:07.785685  135520 addons.go:479] Verifying addon registry=true in "addons-273872"
	I1025 09:00:07.786840  135520 out.go:179] * Verifying registry addon...
	I1025 09:00:07.786865  135520 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-273872 service yakd-dashboard -n yakd-dashboard
	
	I1025 09:00:07.786840  135520 out.go:179] * Verifying ingress addon...
	I1025 09:00:07.789654  135520 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 09:00:07.789942  135520 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1025 09:00:07.795775  135520 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
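The default-storageclass warning is an optimistic-concurrency conflict: the addon callback read the local-path StorageClass, another writer (plausibly the storage-provisioner-rancher apply that completed an instant earlier at 09:00:07.785) updated it first, and the callback's write then carried a stale resourceVersion. A patch sidesteps this class of failure, since a patch submits no resourceVersion; a sketch of the equivalent annotation change:

	# toggle the default-class annotation without a read-modify-write race
	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
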
	I1025 09:00:07.796171  135520 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 09:00:07.796212  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:07.895117  135520 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:00:07.895141  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:08.253237  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.44820013s)
	W1025 09:00:08.253288  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 09:00:08.253313  135520 retry.go:31] will retry after 359.915373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
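Unlike the ig-crd failure, this one is an ordering race rather than a bad manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the CRD introducing that kind is created by another file in the same apply, and the API server had not finished registering it when the custom resource arrived. The first pass did create the CRDs (see the stdout above), which is why the single retry issued at 09:00:08.613 completes cleanly at 09:00:11.088. A sketch of an ordering that avoids the race altogether:

	# register the snapshot CRDs, wait until they are served, then create the CRs
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
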
	I1025 09:00:08.253480  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.004213665s)
	I1025 09:00:08.253520  135520 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-273872"
	I1025 09:00:08.255809  135520 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 09:00:08.258250  135520 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 09:00:08.262447  135520 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:00:08.262470  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:08.363297  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:08.363535  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:08.422933  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:08.422967  135520 retry.go:31] will retry after 382.154852ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:08.613813  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1025 09:00:08.677113  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:08.762077  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:08.792786  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:08.792925  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:08.806048  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:09.261929  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:09.362243  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:09.362405  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:09.761499  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:09.792920  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:09.793113  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:10.261625  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:10.362373  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:10.362513  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:10.677554  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:10.761151  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:10.792483  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:10.792547  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:11.088008  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.474140731s)
	I1025 09:00:11.088079  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.282005281s)
	W1025 09:00:11.088101  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:11.088119  135520 retry.go:31] will retry after 338.329511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:11.262591  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:11.293100  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:11.293250  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:11.427490  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:11.761240  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:11.792561  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:11.792713  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:11.956253  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:11.956284  135520 retry.go:31] will retry after 552.692268ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:12.262244  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:12.362505  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:12.362710  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:12.509894  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:12.761216  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:12.792746  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:12.792924  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:13.040558  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:13.040588  135520 retry.go:31] will retry after 1.629636383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:00:13.177801  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:13.261810  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:13.293206  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:13.293438  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:13.761648  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:13.793177  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:13.793391  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:13.862588  135520 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 09:00:13.862676  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:13.880273  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:13.986299  135520 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 09:00:13.999113  135520 addons.go:238] Setting addon gcp-auth=true in "addons-273872"
	I1025 09:00:13.999169  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:13.999651  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:14.017222  135520 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 09:00:14.017272  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:14.034860  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:14.133231  135520 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 09:00:14.134416  135520 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:00:14.135470  135520 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 09:00:14.135487  135520 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 09:00:14.149184  135520 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 09:00:14.149204  135520 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 09:00:14.162144  135520 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:00:14.162167  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 09:00:14.174649  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:00:14.261606  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:14.293165  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:14.293256  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:14.471476  135520 addons.go:479] Verifying addon gcp-auth=true in "addons-273872"
	I1025 09:00:14.473197  135520 out.go:179] * Verifying gcp-auth addon...
	I1025 09:00:14.475026  135520 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 09:00:14.477407  135520 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 09:00:14.477426  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:14.670717  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:14.761882  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:14.792730  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:14.792859  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:14.978691  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:15.177865  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	W1025 09:00:15.196511  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:15.196539  135520 retry.go:31] will retry after 2.631982259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:15.261661  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:15.293110  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:15.293266  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:15.478407  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:15.762391  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:15.793036  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:15.793209  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:15.977725  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:16.261443  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:16.292905  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:16.293034  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:16.478448  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:16.761653  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:16.793278  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:16.793497  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:16.978116  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:17.177948  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:17.261451  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:17.292914  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:17.293173  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:17.477704  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:17.762303  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:17.792725  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:17.792899  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:17.828906  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:17.977972  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:18.261841  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:18.292688  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:18.292852  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:18.350884  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:18.350918  135520 retry.go:31] will retry after 2.879419058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:18.478511  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:18.762173  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:18.792380  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:18.792556  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:18.977817  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:19.261864  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:19.292581  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:19.292673  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:19.478480  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:19.677047  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:19.762082  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:19.792746  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:19.792792  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:19.978422  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:20.261613  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:20.292842  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:20.292978  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:20.478519  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:20.761871  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:20.792317  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:20.792478  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:20.978236  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:21.231074  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:21.261685  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:21.293463  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:21.293577  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:21.477891  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:21.677865  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:21.761214  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:00:21.762799  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:21.762827  135520 retry.go:31] will retry after 2.17085207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:21.792564  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:21.792788  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:21.978230  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:22.261430  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:22.292703  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:22.292871  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:22.478637  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:22.761965  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:22.792431  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:22.792655  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:22.977936  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:23.261170  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:23.292692  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:23.292790  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:23.478631  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:23.761785  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:23.793466  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:23.793622  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:23.934713  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:23.978021  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:24.177637  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:24.261578  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:24.293309  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:24.293309  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 09:00:24.459248  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:24.459282  135520 retry.go:31] will retry after 8.224889013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
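The delays retry.go schedules across this loop (258ms, 382ms, 552ms, 1.63s, 2.63s, 2.88s, 2.17s, 8.22s so far) trace a jittered exponential backoff; the randomization is why 2.88s is followed by the shorter 2.17s. Rendered as a shell loop over the observed delays, assuming the same command and paths as the log:

	# shape of the backoff loop driving the repeated ig-crd applies
	for delay in 0.26 0.38 0.55 1.63 2.63 2.88 2.17 8.22; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml \
	    && break
	  sleep "$delay"
	done

Since the manifest itself is malformed, no amount of backoff can succeed here; the loop simply runs out of retries.
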
	I1025 09:00:24.477762  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:24.761056  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:24.792462  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:24.792619  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:24.978126  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:25.261498  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:25.293044  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:25.293074  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:25.477700  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:25.761214  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:25.792773  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:25.792851  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:25.978462  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:26.261650  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:26.293329  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:26.293404  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:26.477975  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:26.677575  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:26.761028  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:26.792698  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:26.792883  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:26.978283  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:27.261200  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:27.292606  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:27.292685  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:27.478394  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:27.761977  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:27.792276  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:27.792402  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:27.977819  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:28.260804  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:28.292181  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:28.292832  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:28.478414  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:28.677957  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:28.761490  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:28.793072  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:28.793151  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:28.978219  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:29.261232  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:29.292856  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:29.293071  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:29.479012  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:29.761983  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:29.792408  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:29.792507  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:29.978101  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:30.262099  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:30.292301  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:30.292456  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:30.478333  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:30.761735  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:30.792405  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:30.792922  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:30.978592  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:31.177549  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:31.260991  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:31.292725  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:31.292812  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:31.478611  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:31.762066  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:31.792579  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:31.792642  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:31.978123  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:32.261687  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:32.293106  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:32.293329  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:32.477655  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:32.684858  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:32.760774  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:32.792617  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:32.792723  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:32.977458  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:33.204224  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:33.204261  135520 retry.go:31] will retry after 11.723838383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
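
The retry scheduling visible in this log (8.22s after the first failure, 11.72s here, 9.85s and 24.30s on later attempts) is consistent with an exponential backoff with random jitter. The exact policy inside retry.go is not shown in this output, so the loop below is only a sketch of that general pattern:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping between failures with
// exponentially growing delays plus random jitter, so repeated failures
// do not hammer the apiserver on a fixed cadence.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retry(4, 4*time.Second, func() error {
		return fmt.Errorf("Process exited with status 1")
	})
}
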
	I1025 09:00:33.260736  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:33.293165  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:33.293225  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:33.477819  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:33.677282  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:33.761856  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:33.792399  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:33.792528  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:33.978104  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:34.261322  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:34.292795  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:34.292957  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:34.478426  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:34.761447  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:34.793110  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:34.793262  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:34.977668  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:35.261647  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:35.293172  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:35.293365  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:35.478143  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:35.677777  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:35.761671  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:35.793229  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:35.793280  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:35.977835  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:36.261304  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:36.292916  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:36.293055  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:36.478161  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:36.761645  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:36.793179  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:36.793331  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:36.978271  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:37.261634  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:37.293450  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:37.293623  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:37.478170  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:37.677840  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:37.761508  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:37.793218  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:37.793342  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:37.978017  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:38.261235  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:38.292763  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:38.292997  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:38.478404  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:38.761490  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:38.793340  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:38.793363  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:38.977728  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:39.262177  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:39.292618  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:39.292645  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:39.478247  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:39.761174  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:39.792934  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:39.792962  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:39.978558  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:40.176859  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:40.261236  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:40.292882  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:40.292954  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:40.477707  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:40.761288  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:40.793084  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:40.793232  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:40.977462  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:41.261689  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:41.293196  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:41.293400  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:41.478061  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:41.761646  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:41.793264  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:41.793418  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:41.978046  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:42.177719  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:42.261290  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:42.292848  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:42.293027  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:42.478290  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:42.761845  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:42.792468  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:42.792476  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:42.978367  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:43.261636  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:43.293038  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:43.293155  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:43.477754  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:43.760901  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:43.792269  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:43.792379  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:43.978056  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:44.179453  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:44.260848  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:44.292630  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:44.292676  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:44.477603  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:44.760920  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:44.792787  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:44.792894  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:44.929089  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:44.978502  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:45.262021  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:45.292869  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:45.292913  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:45.459694  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:45.459725  135520 retry.go:31] will retry after 9.845798164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:45.478362  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:45.761898  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:45.792378  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:45.792733  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:45.978336  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:46.261720  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:46.293379  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:46.293421  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:46.478058  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:46.677874  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:46.761518  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:46.793198  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:46.793277  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:46.978046  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:47.261651  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:47.293142  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:47.293376  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:47.477812  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:47.761080  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:47.792574  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:47.792771  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:47.978314  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:48.177607  135520 node_ready.go:49] node "addons-273872" is "Ready"
	I1025 09:00:48.177644  135520 node_ready.go:38] duration metric: took 41.503220016s for node "addons-273872" to be "Ready" ...
	I1025 09:00:48.177676  135520 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:00:48.177738  135520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:00:48.196437  135520 api_server.go:72] duration metric: took 42.026542072s to wait for apiserver process to appear ...
	I1025 09:00:48.196469  135520 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:00:48.196501  135520 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 09:00:48.201371  135520 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 09:00:48.202255  135520 api_server.go:141] control plane version: v1.34.1
	I1025 09:00:48.202283  135520 api_server.go:131] duration metric: took 5.804933ms to wait for apiserver health ...
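
The health check recorded above is a plain HTTPS GET against the apiserver that succeeds once /healthz answers 200 with the body "ok". A sketch of that probe (endpoint copied from the log; certificate verification is skipped here only because the sketch has no cluster CA to trust):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Illustration only: a real client should verify against the cluster CA.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.49.2:8443/healthz returned %d:\n%s\n", resp.StatusCode, body)
}
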
	I1025 09:00:48.202295  135520 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:00:48.205896  135520 system_pods.go:59] 20 kube-system pods found
	I1025 09:00:48.205922  135520 system_pods.go:61] "amd-gpu-device-plugin-p8cjx" [7df88268-84bc-4cef-97da-8345d34f20d3] Pending
	I1025 09:00:48.205927  135520 system_pods.go:61] "coredns-66bc5c9577-gnhvz" [67796c5e-4dcd-4172-ba92-ecc25b3c5414] Pending
	I1025 09:00:48.205931  135520 system_pods.go:61] "csi-hostpath-attacher-0" [7bccad85-6c5d-44e1-9233-41446de6398a] Pending
	I1025 09:00:48.205935  135520 system_pods.go:61] "csi-hostpath-resizer-0" [588ade2d-170f-4c01-b826-205218e4d48f] Pending
	I1025 09:00:48.205938  135520 system_pods.go:61] "csi-hostpathplugin-p89jc" [eb1f8157-cd16-4765-8677-21cbafc12beb] Pending
	I1025 09:00:48.205942  135520 system_pods.go:61] "etcd-addons-273872" [0edd4187-dc77-4982-b770-8190b76988fb] Running
	I1025 09:00:48.205946  135520 system_pods.go:61] "kindnet-x8plr" [39bc0880-5a63-47b5-b14a-3781d261f34c] Running
	I1025 09:00:48.205953  135520 system_pods.go:61] "kube-apiserver-addons-273872" [3097efdc-7bf0-41f4-9918-ca201dce37e3] Running
	I1025 09:00:48.205964  135520 system_pods.go:61] "kube-controller-manager-addons-273872" [9167ae2d-493a-4c44-b92d-2a728d1fe2b9] Running
	I1025 09:00:48.205974  135520 system_pods.go:61] "kube-ingress-dns-minikube" [77a37ede-f2e7-4344-a23b-57828fe944f2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:00:48.205980  135520 system_pods.go:61] "kube-proxy-fzsmf" [f65747a8-c743-4556-9204-2237e85f7161] Running
	I1025 09:00:48.205990  135520 system_pods.go:61] "kube-scheduler-addons-273872" [84dfeadb-16fd-460d-aab1-ce37af243e51] Running
	I1025 09:00:48.205997  135520 system_pods.go:61] "metrics-server-85b7d694d7-jm2zb" [bd49c1cd-fde4-48b8-9120-c799d302450e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:00:48.206002  135520 system_pods.go:61] "nvidia-device-plugin-daemonset-6dmpz" [bcd43d18-a3a8-4a82-9fc3-425548e2e636] Pending
	I1025 09:00:48.206028  135520 system_pods.go:61] "registry-6b586f9694-9qs7h" [ab90902a-730d-4265-a4c9-7e84180f5480] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:00:48.206034  135520 system_pods.go:61] "registry-creds-764b6fb674-7gfht" [10616cc6-5266-4eaf-b6cf-f732ba0431ed] Pending
	I1025 09:00:48.206038  135520 system_pods.go:61] "registry-proxy-s6vt6" [e6258e29-be09-4ade-b9f6-99c705fbac83] Pending
	I1025 09:00:48.206043  135520 system_pods.go:61] "snapshot-controller-7d9fbc56b8-sb8v4" [a3b37afa-feea-434e-8287-0cfab3e89fef] Pending
	I1025 09:00:48.206048  135520 system_pods.go:61] "snapshot-controller-7d9fbc56b8-thtbp" [58932137-1365-4542-b308-e09868a9098c] Pending
	I1025 09:00:48.206057  135520 system_pods.go:61] "storage-provisioner" [8c2cfee7-a45c-4a36-8c4a-10818c0656de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:00:48.206064  135520 system_pods.go:74] duration metric: took 3.762579ms to wait for pod list to return data ...
	I1025 09:00:48.206078  135520 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:00:48.208586  135520 default_sa.go:45] found service account: "default"
	I1025 09:00:48.208608  135520 default_sa.go:55] duration metric: took 2.523169ms for default service account to be created ...
	I1025 09:00:48.208617  135520 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:00:48.222237  135520 system_pods.go:86] 20 kube-system pods found
	I1025 09:00:48.222279  135520 system_pods.go:89] "amd-gpu-device-plugin-p8cjx" [7df88268-84bc-4cef-97da-8345d34f20d3] Pending
	I1025 09:00:48.222293  135520 system_pods.go:89] "coredns-66bc5c9577-gnhvz" [67796c5e-4dcd-4172-ba92-ecc25b3c5414] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:00:48.222299  135520 system_pods.go:89] "csi-hostpath-attacher-0" [7bccad85-6c5d-44e1-9233-41446de6398a] Pending
	I1025 09:00:48.222308  135520 system_pods.go:89] "csi-hostpath-resizer-0" [588ade2d-170f-4c01-b826-205218e4d48f] Pending
	I1025 09:00:48.222313  135520 system_pods.go:89] "csi-hostpathplugin-p89jc" [eb1f8157-cd16-4765-8677-21cbafc12beb] Pending
	I1025 09:00:48.222320  135520 system_pods.go:89] "etcd-addons-273872" [0edd4187-dc77-4982-b770-8190b76988fb] Running
	I1025 09:00:48.222336  135520 system_pods.go:89] "kindnet-x8plr" [39bc0880-5a63-47b5-b14a-3781d261f34c] Running
	I1025 09:00:48.222341  135520 system_pods.go:89] "kube-apiserver-addons-273872" [3097efdc-7bf0-41f4-9918-ca201dce37e3] Running
	I1025 09:00:48.222364  135520 system_pods.go:89] "kube-controller-manager-addons-273872" [9167ae2d-493a-4c44-b92d-2a728d1fe2b9] Running
	I1025 09:00:48.222375  135520 system_pods.go:89] "kube-ingress-dns-minikube" [77a37ede-f2e7-4344-a23b-57828fe944f2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:00:48.222381  135520 system_pods.go:89] "kube-proxy-fzsmf" [f65747a8-c743-4556-9204-2237e85f7161] Running
	I1025 09:00:48.222388  135520 system_pods.go:89] "kube-scheduler-addons-273872" [84dfeadb-16fd-460d-aab1-ce37af243e51] Running
	I1025 09:00:48.222400  135520 system_pods.go:89] "metrics-server-85b7d694d7-jm2zb" [bd49c1cd-fde4-48b8-9120-c799d302450e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:00:48.222410  135520 system_pods.go:89] "nvidia-device-plugin-daemonset-6dmpz" [bcd43d18-a3a8-4a82-9fc3-425548e2e636] Pending
	I1025 09:00:48.222418  135520 system_pods.go:89] "registry-6b586f9694-9qs7h" [ab90902a-730d-4265-a4c9-7e84180f5480] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:00:48.222429  135520 system_pods.go:89] "registry-creds-764b6fb674-7gfht" [10616cc6-5266-4eaf-b6cf-f732ba0431ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:00:48.222434  135520 system_pods.go:89] "registry-proxy-s6vt6" [e6258e29-be09-4ade-b9f6-99c705fbac83] Pending
	I1025 09:00:48.222445  135520 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sb8v4" [a3b37afa-feea-434e-8287-0cfab3e89fef] Pending
	I1025 09:00:48.222450  135520 system_pods.go:89] "snapshot-controller-7d9fbc56b8-thtbp" [58932137-1365-4542-b308-e09868a9098c] Pending
	I1025 09:00:48.222458  135520 system_pods.go:89] "storage-provisioner" [8c2cfee7-a45c-4a36-8c4a-10818c0656de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:00:48.222481  135520 retry.go:31] will retry after 188.968671ms: missing components: kube-dns
	I1025 09:00:48.260997  135520 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:00:48.261034  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:48.292313  135520 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:00:48.292335  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:48.292379  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
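
The kapi.go lines that dominate this log are a poll: list the pods matching a label selector, report how many were found, and keep waiting while any of them is still Pending. A client-go sketch of that loop (kubeconfig path and selector taken from the log; the namespace and polling interval are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "kubernetes.io/minikube-addons=registry"
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		fmt.Printf("Found %d Pods for label selector %s\n", len(pods.Items), selector)
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
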
	I1025 09:00:48.418636  135520 system_pods.go:86] 20 kube-system pods found
	I1025 09:00:48.418674  135520 system_pods.go:89] "amd-gpu-device-plugin-p8cjx" [7df88268-84bc-4cef-97da-8345d34f20d3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:00:48.418685  135520 system_pods.go:89] "coredns-66bc5c9577-gnhvz" [67796c5e-4dcd-4172-ba92-ecc25b3c5414] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:00:48.418694  135520 system_pods.go:89] "csi-hostpath-attacher-0" [7bccad85-6c5d-44e1-9233-41446de6398a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:00:48.418704  135520 system_pods.go:89] "csi-hostpath-resizer-0" [588ade2d-170f-4c01-b826-205218e4d48f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:00:48.418713  135520 system_pods.go:89] "csi-hostpathplugin-p89jc" [eb1f8157-cd16-4765-8677-21cbafc12beb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:00:48.418719  135520 system_pods.go:89] "etcd-addons-273872" [0edd4187-dc77-4982-b770-8190b76988fb] Running
	I1025 09:00:48.418727  135520 system_pods.go:89] "kindnet-x8plr" [39bc0880-5a63-47b5-b14a-3781d261f34c] Running
	I1025 09:00:48.418735  135520 system_pods.go:89] "kube-apiserver-addons-273872" [3097efdc-7bf0-41f4-9918-ca201dce37e3] Running
	I1025 09:00:48.418741  135520 system_pods.go:89] "kube-controller-manager-addons-273872" [9167ae2d-493a-4c44-b92d-2a728d1fe2b9] Running
	I1025 09:00:48.418755  135520 system_pods.go:89] "kube-ingress-dns-minikube" [77a37ede-f2e7-4344-a23b-57828fe944f2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:00:48.418764  135520 system_pods.go:89] "kube-proxy-fzsmf" [f65747a8-c743-4556-9204-2237e85f7161] Running
	I1025 09:00:48.418774  135520 system_pods.go:89] "kube-scheduler-addons-273872" [84dfeadb-16fd-460d-aab1-ce37af243e51] Running
	I1025 09:00:48.418792  135520 system_pods.go:89] "metrics-server-85b7d694d7-jm2zb" [bd49c1cd-fde4-48b8-9120-c799d302450e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:00:48.418805  135520 system_pods.go:89] "nvidia-device-plugin-daemonset-6dmpz" [bcd43d18-a3a8-4a82-9fc3-425548e2e636] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:00:48.418817  135520 system_pods.go:89] "registry-6b586f9694-9qs7h" [ab90902a-730d-4265-a4c9-7e84180f5480] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:00:48.418828  135520 system_pods.go:89] "registry-creds-764b6fb674-7gfht" [10616cc6-5266-4eaf-b6cf-f732ba0431ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:00:48.418839  135520 system_pods.go:89] "registry-proxy-s6vt6" [e6258e29-be09-4ade-b9f6-99c705fbac83] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:00:48.418849  135520 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sb8v4" [a3b37afa-feea-434e-8287-0cfab3e89fef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:00:48.418861  135520 system_pods.go:89] "snapshot-controller-7d9fbc56b8-thtbp" [58932137-1365-4542-b308-e09868a9098c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:00:48.418868  135520 system_pods.go:89] "storage-provisioner" [8c2cfee7-a45c-4a36-8c4a-10818c0656de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:00:48.418888  135520 retry.go:31] will retry after 254.310097ms: missing components: kube-dns
	I1025 09:00:48.517852  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:48.678137  135520 system_pods.go:86] 20 kube-system pods found
	I1025 09:00:48.678175  135520 system_pods.go:89] "amd-gpu-device-plugin-p8cjx" [7df88268-84bc-4cef-97da-8345d34f20d3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:00:48.678185  135520 system_pods.go:89] "coredns-66bc5c9577-gnhvz" [67796c5e-4dcd-4172-ba92-ecc25b3c5414] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:00:48.678199  135520 system_pods.go:89] "csi-hostpath-attacher-0" [7bccad85-6c5d-44e1-9233-41446de6398a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:00:48.678207  135520 system_pods.go:89] "csi-hostpath-resizer-0" [588ade2d-170f-4c01-b826-205218e4d48f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:00:48.678214  135520 system_pods.go:89] "csi-hostpathplugin-p89jc" [eb1f8157-cd16-4765-8677-21cbafc12beb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:00:48.678220  135520 system_pods.go:89] "etcd-addons-273872" [0edd4187-dc77-4982-b770-8190b76988fb] Running
	I1025 09:00:48.678225  135520 system_pods.go:89] "kindnet-x8plr" [39bc0880-5a63-47b5-b14a-3781d261f34c] Running
	I1025 09:00:48.678230  135520 system_pods.go:89] "kube-apiserver-addons-273872" [3097efdc-7bf0-41f4-9918-ca201dce37e3] Running
	I1025 09:00:48.678235  135520 system_pods.go:89] "kube-controller-manager-addons-273872" [9167ae2d-493a-4c44-b92d-2a728d1fe2b9] Running
	I1025 09:00:48.678252  135520 system_pods.go:89] "kube-ingress-dns-minikube" [77a37ede-f2e7-4344-a23b-57828fe944f2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:00:48.678258  135520 system_pods.go:89] "kube-proxy-fzsmf" [f65747a8-c743-4556-9204-2237e85f7161] Running
	I1025 09:00:48.678266  135520 system_pods.go:89] "kube-scheduler-addons-273872" [84dfeadb-16fd-460d-aab1-ce37af243e51] Running
	I1025 09:00:48.678274  135520 system_pods.go:89] "metrics-server-85b7d694d7-jm2zb" [bd49c1cd-fde4-48b8-9120-c799d302450e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:00:48.678282  135520 system_pods.go:89] "nvidia-device-plugin-daemonset-6dmpz" [bcd43d18-a3a8-4a82-9fc3-425548e2e636] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:00:48.678289  135520 system_pods.go:89] "registry-6b586f9694-9qs7h" [ab90902a-730d-4265-a4c9-7e84180f5480] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:00:48.678298  135520 system_pods.go:89] "registry-creds-764b6fb674-7gfht" [10616cc6-5266-4eaf-b6cf-f732ba0431ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:00:48.678305  135520 system_pods.go:89] "registry-proxy-s6vt6" [e6258e29-be09-4ade-b9f6-99c705fbac83] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:00:48.678315  135520 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sb8v4" [a3b37afa-feea-434e-8287-0cfab3e89fef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:00:48.678323  135520 system_pods.go:89] "snapshot-controller-7d9fbc56b8-thtbp" [58932137-1365-4542-b308-e09868a9098c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:00:48.678329  135520 system_pods.go:89] "storage-provisioner" [8c2cfee7-a45c-4a36-8c4a-10818c0656de] Running
	I1025 09:00:48.678340  135520 system_pods.go:126] duration metric: took 469.71697ms to wait for k8s-apps to be running ...
	I1025 09:00:48.678364  135520 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:00:48.678418  135520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:00:48.694439  135520 system_svc.go:56] duration metric: took 16.066599ms WaitForService to wait for kubelet
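
The kubelet probe above leans on systemctl's exit status rather than its output: with --quiet the command prints nothing and exits 0 only when the unit is active. A sketch mirroring the exact command from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means the unit is active; any non-zero exit surfaces as err.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
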
	I1025 09:00:48.694476  135520 kubeadm.go:586] duration metric: took 42.524583043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:00:48.694495  135520 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:00:48.696730  135520 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:00:48.696754  135520 node_conditions.go:123] node cpu capacity is 8
	I1025 09:00:48.696766  135520 node_conditions.go:105] duration metric: took 2.26683ms to run NodePressure ...
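
The NodePressure figures above come from the node object's reported capacity. A client-go sketch that reads the same fields (node name and kubeconfig path taken from the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-273872", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity["ephemeral-storage"]
	cpu := node.Status.Capacity["cpu"]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
}
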
	I1025 09:00:48.696786  135520 start.go:241] waiting for startup goroutines ...
	I1025 09:00:48.777284  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:48.792746  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:48.792952  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:48.978939  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:49.262476  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:49.293564  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:49.293664  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:49.479454  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:49.761468  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:49.793163  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:49.793175  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:49.978947  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:50.262106  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:50.292709  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:50.292888  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:50.478634  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:50.762289  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:50.793005  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:50.793047  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:50.980081  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:51.261511  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:51.293899  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:51.293925  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:51.478904  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:51.762816  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:51.863084  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:51.863279  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:51.979548  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:52.261786  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:52.293680  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:52.293728  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:52.478919  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:52.762713  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:52.793432  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:52.793597  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:52.978398  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:53.261561  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:53.293491  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:53.293776  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:53.479076  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:53.762622  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:53.793579  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:53.793794  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:53.978465  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:54.261721  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:54.293538  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:54.293685  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:54.478274  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:54.761708  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:54.862175  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:54.862215  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:54.978717  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:55.261985  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:55.293001  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:55.293112  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:55.306234  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:55.478822  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:55.762371  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:55.793376  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:55.793551  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:55.915829  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:55.915866  135520 retry.go:31] will retry after 24.295474081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
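
	The failure above is kubectl's client-side validation: every YAML document in an applied stream must set both apiVersion and kind, and here at least one document in ig-crd.yaml sets neither, suggesting the CRD manifest shipped empty or malformed. The valid documents in the same apply still go through, which is why the gadget namespace, RBAC objects, and daemonset report "unchanged"/"configured" while the command exits 1. A minimal sketch of that pre-flight check, assuming a plain multi-document YAML decode is enough; validateManifest and its use of gopkg.in/yaml.v3 are illustrative, not minikube's or kubectl's code:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// validateManifest decodes each YAML document in the file and rejects
	// any non-empty document missing apiVersion or kind, mirroring the
	// client-side check that failed above.
	func validateManifest(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 1; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					return nil // end of stream, all documents passed
				}
				return err
			}
			if doc == nil {
				continue // empty document, e.g. a stray "---"
			}
			if doc["apiVersion"] == nil || doc["kind"] == nil {
				return fmt.Errorf("%s: document %d: apiVersion or kind not set", path, i)
			}
		}
	}

	func main() {
		if err := validateManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
			fmt.Println(err)
		}
	}
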
	I1025 09:00:55.978675  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:56.262331  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:56.292983  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:56.293017  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:56.479028  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:56.762980  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:56.792447  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:56.792672  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:56.979051  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:57.262177  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:57.292995  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:57.293373  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:57.477710  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:57.762321  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:57.792996  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:57.793132  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:57.977477  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:58.261444  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:58.293071  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:58.293076  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:58.478894  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:58.763015  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:58.793976  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:58.794857  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:58.978778  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:59.261740  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:59.293284  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:59.293496  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:59.533648  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:59.762014  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:59.792642  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:59.792715  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:59.978495  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:00.261468  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:00.292999  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:00.293198  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:00.477816  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:00.762800  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:00.792601  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:00.792620  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:00.978310  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:01.261144  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:01.292993  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:01.293014  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:01.477385  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:01.761556  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:01.793020  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:01.793143  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:01.977934  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:02.262496  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:02.292846  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:02.292912  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:02.478491  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:02.761705  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:02.793154  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:02.793189  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:02.977935  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:03.261941  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:03.292820  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:03.292851  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:03.478501  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:03.761322  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:03.792849  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:03.792886  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:03.978709  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:04.262110  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:04.292466  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:04.292631  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:04.477978  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:04.762585  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:04.793028  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:04.793188  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:04.977668  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:05.261649  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:05.293440  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:05.293526  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:05.478214  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:05.761208  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:05.792907  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:05.792939  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:05.978288  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:06.261404  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:06.293011  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:06.293030  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:06.477800  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:06.762483  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:06.792997  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:06.793046  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:06.978475  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:07.261579  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:07.293714  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:07.293938  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:07.478559  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:07.762081  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:07.792837  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:07.792952  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:07.978814  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:08.262683  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:08.294340  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:08.294426  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:08.477762  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:08.761939  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:08.792395  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:08.793027  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:08.977973  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:09.262455  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:09.293897  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:09.293934  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:09.478737  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:09.762224  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:09.792929  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:09.793156  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:09.978743  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:10.261668  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:10.293303  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:10.293313  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:10.478285  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:10.761614  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:10.793253  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:10.793341  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:10.978073  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:11.262701  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:11.363562  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:11.363634  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:11.478045  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:11.761747  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:11.862626  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:11.862722  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:11.978087  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:12.262147  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:12.292698  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:12.292758  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:12.479438  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:12.761874  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:12.863465  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:12.863608  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:12.978505  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:13.261805  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:13.293764  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:13.293809  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:13.478605  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:13.761951  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:13.793533  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:13.793577  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:13.977715  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:14.261805  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:14.293281  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:14.293281  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:14.478396  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:14.761496  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:14.862450  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:14.862453  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:14.978284  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:15.261488  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:15.293450  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:15.293501  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:15.478386  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:15.762410  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:15.862137  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:15.862321  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:15.977783  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:16.262390  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:16.293255  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:16.293446  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:16.478061  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:16.762492  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:16.863058  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:16.863150  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:16.977517  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:17.261815  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:17.362748  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:17.362746  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:17.478287  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:17.760938  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:17.793804  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:17.794032  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:17.978691  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:18.261794  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:18.293047  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:18.293177  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:18.477336  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:18.761706  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:18.793156  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:18.793286  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:18.977963  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:19.261968  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:19.293014  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:19.293234  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:19.527700  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:19.762071  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:19.792399  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:19.792485  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:19.978390  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:20.211498  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:01:20.263544  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:20.297383  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:20.298193  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:20.479510  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:20.765725  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:20.797218  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:20.797396  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:20.979226  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:01:21.146603  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:01:21.146702  135520 retry.go:31] will retry after 22.055491936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
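
	retry.go schedules each reattempt after a randomized, growing delay (24.3s for the first retry above, 22.1s here), so a transient apply failure gets several chances before the addon is declared failed. A minimal sketch of that retry-with-jittered-backoff shape; applyWithRetry, the attempt count, and the constants are assumptions, not minikube's actual parameters:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// applyWithRetry re-runs apply until it succeeds or attempts are
	// exhausted, sleeping a growing, jittered delay between tries.
	func applyWithRetry(apply func() error, attempts int, base time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			if i == attempts-1 {
				break
			}
			delay := base << i                                    // grow the base delay each round
			delay += time.Duration(rand.Int63n(int64(delay))) / 2 // jitter, hence the uneven waits in the log
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := applyWithRetry(func() error { return errors.New("apply failed") }, 3, 2*time.Second)
		fmt.Println("giving up:", err)
	}

	In this run the manifest is deterministically invalid, so every retry reproduces the same validation error until the attempts run out.
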
	I1025 09:01:21.262472  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:21.293552  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:21.294595  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:21.478551  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:21.762374  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:21.794329  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:21.794559  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:21.978850  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:22.262476  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:22.293773  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:22.293826  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:22.478808  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:22.864058  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:22.864940  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:22.864954  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:23.106515  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:23.261327  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:23.292958  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:23.293131  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:23.478948  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:23.762366  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:23.793234  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:23.793307  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:23.978006  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:24.262110  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:24.293410  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:24.293490  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:24.478473  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:24.761758  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:24.793744  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:24.793793  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:24.979098  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:25.262284  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:25.292935  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:25.293267  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:25.478928  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:25.762417  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:25.793836  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:25.794800  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:25.979027  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:26.262465  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:26.293537  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:26.363835  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:26.478112  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:26.762957  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:26.793630  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:26.793636  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:26.978988  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:27.261951  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:27.292595  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:27.292643  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:27.478078  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:27.762042  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:27.792926  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:27.792994  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:27.978892  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:28.262276  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:28.293015  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:28.293224  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:28.478134  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:28.761462  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:28.793280  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:28.793332  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:28.978097  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:29.262098  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:29.293817  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:29.294088  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:29.478076  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:29.762214  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:29.793136  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:29.793240  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:29.978379  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:30.262472  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:30.293269  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:30.293342  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:30.478993  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:30.762097  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:30.792657  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:30.792889  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:30.978392  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:31.261149  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:31.292741  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:31.292966  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:31.478498  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:31.761647  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:31.792885  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:31.792972  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:31.978885  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:32.262780  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:32.293433  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:32.293473  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:32.479432  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:32.761277  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:32.793808  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:32.793989  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:32.978469  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:33.344038  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:33.344059  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:33.344038  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:33.478443  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:33.761910  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:33.793195  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:33.793230  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:33.978275  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:34.261053  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:34.292766  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:34.292772  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:34.478228  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:34.761435  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:34.793341  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:34.793399  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:34.978376  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:35.261563  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:35.293037  135520 kapi.go:107] duration metric: took 1m27.503089224s to wait for kubernetes.io/minikube-addons=registry ...
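
	Each kapi.go:96 line above is one pass of a poll: list the pods matching a label selector, log the state if none is Running yet, and try again, until the wait completes and the duration metric is printed, as it just did for the registry selector. A sketch of that loop using client-go; waitForLabel, the poll interval, and the fake-clientset demo are assumptions about the shape of the wait, not minikube's implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/kubernetes/fake"
	)

	// waitForLabel polls pods matching selector in ns until one is Running,
	// logging each unsuccessful pass like the kapi.go lines above.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		// A fake clientset seeded with one Running pod stands in for the
		// cluster, so the loop exits on its first pass.
		cs := fake.NewSimpleClientset(&corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "registry-demo",
				Namespace: "kube-system",
				Labels:    map[string]string{"kubernetes.io/minikube-addons": "registry"},
			},
			Status: corev1.PodStatus{Phase: corev1.PodRunning},
		})
		fmt.Println(waitForLabel(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry"))
	}
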
	I1025 09:01:35.294037  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:35.477901  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:35.762342  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:35.793486  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:35.979033  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:36.261369  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:36.292995  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:36.478812  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:36.762335  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:36.792851  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:36.982031  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:37.260561  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:37.292642  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:37.477938  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:37.762115  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:37.792377  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:37.977886  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:38.261853  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:38.292508  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:38.478414  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:38.761645  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:38.793283  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:38.978289  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:39.261747  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:39.293117  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:39.478003  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:39.762084  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:39.792824  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:39.978270  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:40.262558  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:40.293681  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:40.477886  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:40.762215  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:40.793929  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:40.980253  135520 kapi.go:107] duration metric: took 1m26.505222003s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 09:01:40.982496  135520 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-273872 cluster.
	I1025 09:01:40.983825  135520 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 09:01:40.985009  135520 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
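
	The three gcp-auth messages above describe a mutating webhook: every newly created pod in the cluster gets the credential secret mounted unless it opts out with the gcp-auth-skip-secret label. A sketch of an opted-out pod built from the k8s.io/api types; the label key comes from the log, while the pod name, image, and the label value "true" are assumptions:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// A pod the gcp-auth webhook should leave alone: the skip label
		// sits in the pod's metadata.
		noCreds := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-creds",
				Namespace: "default",
				Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
			},
		}
		fmt.Println(noCreds.Name, noCreds.Labels)
	}

	As the third message notes, the webhook only sees pod creation, so pods that already existed when the addon came up keep their original spec until recreated.
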
	I1025 09:01:41.263717  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:41.294027  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:41.792423  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:41.793331  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:42.262282  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:42.293157  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:42.761506  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:42.793388  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:43.202687  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:01:43.263043  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:43.292863  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:43.762538  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:43.793484  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:01:43.924194  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:01:43.924330  135520 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
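
The retry above is a manifest problem, not a cluster problem: kubectl rejects any YAML document that does not set the apiVersion and kind fields required on every Kubernetes object, which is exactly what the "apiVersion not set, kind not set" stderr reports for ig-crd.yaml. A minimal Go sketch of a pre-apply check that would surface this before shelling out to kubectl (a hypothetical helper, not minikube's actual code; it assumes the sigs.k8s.io/yaml module and naively splits multi-document files on "---"):

package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml"
)

// typeMeta mirrors the two fields kubectl validates on every object.
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

// validateManifest decodes each document in a (possibly multi-document)
// YAML file and reports the first one missing apiVersion or kind.
// Splitting on "---" is a simplification of kubectl's real YAML decoder.
func validateManifest(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for i, doc := range strings.Split(string(data), "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue // empty documents are ignored, as kubectl ignores them
		}
		var tm typeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			return fmt.Errorf("%s document %d: %w", path, i, err)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			return fmt.Errorf("%s document %d: apiVersion not set, kind not set", path, i)
		}
	}
	return nil
}

func main() {
	for _, f := range os.Args[1:] {
		if err := validateManifest(f); err != nil {
			fmt.Fprintln(os.Stderr, "error:", err)
			os.Exit(1)
		}
	}
}
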
	I1025 09:01:44.261082  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:44.292675  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:44.762817  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:44.793948  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:45.262324  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:45.292926  135520 kapi.go:107] duration metric: took 1m37.503271119s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 09:01:45.762468  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:46.262195  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:46.764737  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:47.262376  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:47.762064  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:48.262367  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:48.761950  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:49.261883  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:49.762615  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:50.262159  135520 kapi.go:107] duration metric: took 1m42.003909786s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 09:01:50.263861  135520 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, amd-gpu-device-plugin, registry-creds, ingress-dns, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1025 09:01:50.264953  135520 addons.go:514] duration metric: took 1m44.09506125s for enable addons: enabled=[cloud-spanner storage-provisioner amd-gpu-device-plugin registry-creds ingress-dns nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1025 09:01:50.264994  135520 start.go:246] waiting for cluster config update ...
	I1025 09:01:50.265012  135520 start.go:255] writing updated cluster config ...
	I1025 09:01:50.265286  135520 ssh_runner.go:195] Run: rm -f paused
	I1025 09:01:50.269217  135520 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:01:50.272398  135520 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gnhvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.276111  135520 pod_ready.go:94] pod "coredns-66bc5c9577-gnhvz" is "Ready"
	I1025 09:01:50.276132  135520 pod_ready.go:86] duration metric: took 3.712979ms for pod "coredns-66bc5c9577-gnhvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.277821  135520 pod_ready.go:83] waiting for pod "etcd-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.281087  135520 pod_ready.go:94] pod "etcd-addons-273872" is "Ready"
	I1025 09:01:50.281104  135520 pod_ready.go:86] duration metric: took 3.266141ms for pod "etcd-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.282841  135520 pod_ready.go:83] waiting for pod "kube-apiserver-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.286066  135520 pod_ready.go:94] pod "kube-apiserver-addons-273872" is "Ready"
	I1025 09:01:50.286086  135520 pod_ready.go:86] duration metric: took 3.227704ms for pod "kube-apiserver-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.287611  135520 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.673707  135520 pod_ready.go:94] pod "kube-controller-manager-addons-273872" is "Ready"
	I1025 09:01:50.673734  135520 pod_ready.go:86] duration metric: took 386.103591ms for pod "kube-controller-manager-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.874296  135520 pod_ready.go:83] waiting for pod "kube-proxy-fzsmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:51.272842  135520 pod_ready.go:94] pod "kube-proxy-fzsmf" is "Ready"
	I1025 09:01:51.272870  135520 pod_ready.go:86] duration metric: took 398.548365ms for pod "kube-proxy-fzsmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:51.473285  135520 pod_ready.go:83] waiting for pod "kube-scheduler-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:51.872963  135520 pod_ready.go:94] pod "kube-scheduler-addons-273872" is "Ready"
	I1025 09:01:51.872993  135520 pod_ready.go:86] duration metric: took 399.682236ms for pod "kube-scheduler-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:51.873004  135520 pod_ready.go:40] duration metric: took 1.603759497s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:01:51.919578  135520 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:01:51.922306  135520 out.go:179] * Done! kubectl is now configured to use "addons-273872" cluster and "default" namespace by default
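
For reference, the pod_ready.go checks logged above treat a pod as "Ready" once its PodReady condition reports True. A minimal client-go sketch of an equivalent check (illustrative only, not minikube's implementation; it assumes a reachable kubeconfig and reuses the k8s-app=kube-dns selector from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True,
// which is the readiness criterion the log lines above describe.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}
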
	
	
	==> CRI-O <==
	Oct 25 09:03:02 addons-273872 crio[769]: time="2025-10-25T09:03:02.559173778Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=e86551e4-8a06-4b7e-9acd-b442cc73f2f9 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:03:02 addons-273872 crio[769]: time="2025-10-25T09:03:02.564508818Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Oct 25 09:03:04 addons-273872 crio[769]: time="2025-10-25T09:03:04.115586769Z" level=info msg="Pulled image: docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=e86551e4-8a06-4b7e-9acd-b442cc73f2f9 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:03:04 addons-273872 crio[769]: time="2025-10-25T09:03:04.116240291Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=b83d4673-71a0-44b6-84f1-075ff021a984 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:03:04 addons-273872 crio[769]: time="2025-10-25T09:03:04.149812856Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=cb7b8b25-62c7-4bb1-84cb-242f0805ad5c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:03:04 addons-273872 crio[769]: time="2025-10-25T09:03:04.153641724Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-7gfht/registry-creds" id=bef34615-59e1-4d32-b82a-e7ef65170aca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:03:04 addons-273872 crio[769]: time="2025-10-25T09:03:04.153773147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:03:04 addons-273872 crio[769]: time="2025-10-25T09:03:04.160987883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:03:04 addons-273872 crio[769]: time="2025-10-25T09:03:04.161643023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:03:04 addons-273872 crio[769]: time="2025-10-25T09:03:04.190666074Z" level=info msg="Created container ce286d7940d4cf8bcd1c288bb5b14221bd7bbb6b479f38c75025db52ca126ae3: kube-system/registry-creds-764b6fb674-7gfht/registry-creds" id=bef34615-59e1-4d32-b82a-e7ef65170aca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:03:04 addons-273872 crio[769]: time="2025-10-25T09:03:04.191227749Z" level=info msg="Starting container: ce286d7940d4cf8bcd1c288bb5b14221bd7bbb6b479f38c75025db52ca126ae3" id=42ee8e19-f525-430e-bd9a-564fe59827f8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:03:04 addons-273872 crio[769]: time="2025-10-25T09:03:04.192929977Z" level=info msg="Started container" PID=8966 containerID=ce286d7940d4cf8bcd1c288bb5b14221bd7bbb6b479f38c75025db52ca126ae3 description=kube-system/registry-creds-764b6fb674-7gfht/registry-creds id=42ee8e19-f525-430e-bd9a-564fe59827f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dbed440473df956cfcdaf305c8272336eba42d3862089efeb1a12300db701e8c
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.130459865Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-dh8x5/POD" id=fefd752d-6224-4b08-aa70-621750af864a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.130573143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.138539121Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-dh8x5 Namespace:default ID:8ef4293a89e03f34e781f5a896d0df5a02f0a0d44906a15f16655404bee8e4f7 UID:91577d48-3fb0-41c4-95db-75c6f29b7a82 NetNS:/var/run/netns/a58bb183-63d8-4a16-aae1-e2c86c501458 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009041d0}] Aliases:map[]}"
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.138571219Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-dh8x5 to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.149170657Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-dh8x5 Namespace:default ID:8ef4293a89e03f34e781f5a896d0df5a02f0a0d44906a15f16655404bee8e4f7 UID:91577d48-3fb0-41c4-95db-75c6f29b7a82 NetNS:/var/run/netns/a58bb183-63d8-4a16-aae1-e2c86c501458 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009041d0}] Aliases:map[]}"
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.149313594Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-dh8x5 for CNI network kindnet (type=ptp)"
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.150183283Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.151064272Z" level=info msg="Ran pod sandbox 8ef4293a89e03f34e781f5a896d0df5a02f0a0d44906a15f16655404bee8e4f7 with infra container: default/hello-world-app-5d498dc89-dh8x5/POD" id=fefd752d-6224-4b08-aa70-621750af864a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.152340217Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=420eb5b5-ab3b-4076-b76e-976e48b30a53 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.152461415Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=420eb5b5-ab3b-4076-b76e-976e48b30a53 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.152496231Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=420eb5b5-ab3b-4076-b76e-976e48b30a53 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.153118433Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=e9f3683c-d0d6-4eab-872e-f27983b1156f name=/runtime.v1.ImageService/PullImage
	Oct 25 09:04:28 addons-273872 crio[769]: time="2025-10-25T09:04:28.157197815Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	ce286d7940d4c       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   dbed440473df9       registry-creds-764b6fb674-7gfht             kube-system
	78e55145bcf7b       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago        Running             nginx                                    0                   a47a200e14904       nginx                                       default
	1749e24523753       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   940b088fd28d4       busybox                                     default
	6acc989b2a222       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   209f02eecd80a       csi-hostpathplugin-p89jc                    kube-system
	3ef84406aa714       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   209f02eecd80a       csi-hostpathplugin-p89jc                    kube-system
	bfbbb33612538       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   209f02eecd80a       csi-hostpathplugin-p89jc                    kube-system
	30d14efd00c17       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   209f02eecd80a       csi-hostpathplugin-p89jc                    kube-system
	3c4dfd048ae14       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago        Running             node-driver-registrar                    0                   209f02eecd80a       csi-hostpathplugin-p89jc                    kube-system
	f51be06bc9d9a       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             2 minutes ago        Running             controller                               0                   ad679187e79fa       ingress-nginx-controller-675c5ddd98-cdlhj   ingress-nginx
	faaf1bf843a53       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago        Running             gcp-auth                                 0                   65acc4502e968       gcp-auth-78565c9fb4-bjgg6                   gcp-auth
	2f8611e2aa0a5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            2 minutes ago        Running             gadget                                   0                   1da69cccffc52       gadget-w9btk                                gadget
	7ed2f0ed59548       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago        Running             registry-proxy                           0                   671dfbc3079e8       registry-proxy-s6vt6                        kube-system
	9fc2a24b06ef7       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago        Running             amd-gpu-device-plugin                    0                   29d19f77dbdad       amd-gpu-device-plugin-p8cjx                 kube-system
	36e423e3e9d3f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   209f02eecd80a       csi-hostpathplugin-p89jc                    kube-system
	8f0ebcd809044       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   54fafde4dff43       csi-hostpath-resizer-0                      kube-system
	428c8023af396       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   181959880ed6f       nvidia-device-plugin-daemonset-6dmpz        kube-system
	63ac188b24d3a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   3118dd0712a37       snapshot-controller-7d9fbc56b8-sb8v4        kube-system
	d2cd04d0db0a9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   245307580b748       snapshot-controller-7d9fbc56b8-thtbp        kube-system
	99c81d2cbcf13       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   3d5d4c3375257       csi-hostpath-attacher-0                     kube-system
	75730271f6dcb       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   9e3ca0825d44e       yakd-dashboard-5ff678cb9-8sg9x              yakd-dashboard
	5bc8ff2063cf8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   e2d9833da479f       local-path-provisioner-648f6765c9-8n6qc     local-path-storage
	97f200163e1f9       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             3 minutes ago        Exited              patch                                    1                   80b6573db0131       ingress-nginx-admission-patch-gvs8h         ingress-nginx
	3d0c32677f602       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              create                                   0                   95fdb6cafe5f8       ingress-nginx-admission-create-l8qdq        ingress-nginx
	4762e9db22498       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago        Running             cloud-spanner-emulator                   0                   92b2a09ddbff3       cloud-spanner-emulator-86bd5cbb97-x46xr     default
	9fe9c1838c296       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   1d1830dbf3bae       registry-6b586f9694-9qs7h                   kube-system
	a768f7fc3ff87       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   fa21de2c894d2       kube-ingress-dns-minikube                   kube-system
	5123be046b86f       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   d6cffe5fc010a       metrics-server-85b7d694d7-jm2zb             kube-system
	0c53c0cc8c974       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   7d7627b4e7252       coredns-66bc5c9577-gnhvz                    kube-system
	f6a1623c75ccd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   7c3b51bbe148f       storage-provisioner                         kube-system
	856adda6d4a26       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   b92fca7a94ff2       kube-proxy-fzsmf                            kube-system
	b61ce248f4c77       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   9ddfdf290b4ba       kindnet-x8plr                               kube-system
	d47c77a17465c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   00e09e0e45579       kube-controller-manager-addons-273872       kube-system
	8ce2136d4288f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   81c807f6e2343       kube-apiserver-addons-273872                kube-system
	274bb680de1b5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   0775bed4e43fc       kube-scheduler-addons-273872                kube-system
	34b878e3a18d6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   94e1c19d814a7       etcd-addons-273872                          kube-system
	
	
	==> coredns [0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6] <==
	[INFO] 10.244.0.22:47162 - 2185 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006321325s
	[INFO] 10.244.0.22:33420 - 7076 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004364661s
	[INFO] 10.244.0.22:41774 - 49004 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006786433s
	[INFO] 10.244.0.22:53211 - 13893 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004433016s
	[INFO] 10.244.0.22:46833 - 56818 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.01649052s
	[INFO] 10.244.0.22:41965 - 18753 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.000927067s
	[INFO] 10.244.0.22:41302 - 28425 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001032374s
	[INFO] 10.244.0.26:36299 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000236095s
	[INFO] 10.244.0.26:57529 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000182341s
	[INFO] 10.244.0.31:60091 - 34517 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000223076s
	[INFO] 10.244.0.31:37576 - 23446 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000272299s
	[INFO] 10.244.0.31:39938 - 40232 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000155811s
	[INFO] 10.244.0.31:36817 - 43238 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000153422s
	[INFO] 10.244.0.31:43106 - 54485 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000128055s
	[INFO] 10.244.0.31:58864 - 61233 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000203906s
	[INFO] 10.244.0.31:52832 - 19304 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003498996s
	[INFO] 10.244.0.31:35725 - 59273 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003544633s
	[INFO] 10.244.0.31:49854 - 19867 "AAAA IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.00476351s
	[INFO] 10.244.0.31:52267 - 37224 "A IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.00566553s
	[INFO] 10.244.0.31:46609 - 29340 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004367027s
	[INFO] 10.244.0.31:56623 - 44308 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004471246s
	[INFO] 10.244.0.31:40739 - 31954 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00452309s
	[INFO] 10.244.0.31:50048 - 58168 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004636831s
	[INFO] 10.244.0.31:41155 - 47237 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.00179254s
	[INFO] 10.244.0.31:52897 - 53812 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001895932s
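
The long run of NXDOMAIN answers above is ordinary resolv.conf search-path expansion, not a failure: with the Kubernetes default ndots:5, a name such as accounts.google.com (two dots) is first tried against every search domain, and only the final bare-name query returns NOERROR. A short Go sketch that reproduces the query order visible in the log (search list inferred from the log itself; illustrative only):

package main

import (
	"fmt"
	"strings"
)

// candidates returns the lookup order a glibc-style resolver uses:
// names with fewer than ndots dots go through the search list first,
// and the bare name is tried last.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s)
		}
	}
	return append(out, name)
}

func main() {
	search := []string{
		"kube-system.svc.cluster.local", "svc.cluster.local",
		"cluster.local", "local",
		"europe-west2-a.c.k8s-minikube.internal",
		"c.k8s-minikube.internal", "google.internal",
	}
	for _, q := range candidates("accounts.google.com", search, 5) {
		fmt.Println(q) // matches the query sequence in the coredns log above
	}
}
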
	
	
	==> describe nodes <==
	Name:               addons-273872
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-273872
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=addons-273872
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_00_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-273872
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-273872"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 08:59:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-273872
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:04:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:03:54 +0000   Sat, 25 Oct 2025 08:59:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:03:54 +0000   Sat, 25 Oct 2025 08:59:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:03:54 +0000   Sat, 25 Oct 2025 08:59:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:03:54 +0000   Sat, 25 Oct 2025 09:00:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-273872
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                30c17162-2f74-4668-9bd8-3fa3eed59df9
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  default                     cloud-spanner-emulator-86bd5cbb97-x46xr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  default                     hello-world-app-5d498dc89-dh8x5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-w9btk                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  gcp-auth                    gcp-auth-78565c9fb4-bjgg6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-cdlhj    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m22s
	  kube-system                 amd-gpu-device-plugin-p8cjx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-66bc5c9577-gnhvz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m23s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 csi-hostpathplugin-p89jc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-addons-273872                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m29s
	  kube-system                 kindnet-x8plr                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m24s
	  kube-system                 kube-apiserver-addons-273872                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-controller-manager-addons-273872        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-fzsmf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-scheduler-addons-273872                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 metrics-server-85b7d694d7-jm2zb              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m22s
	  kube-system                 nvidia-device-plugin-daemonset-6dmpz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 registry-6b586f9694-9qs7h                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 registry-creds-764b6fb674-7gfht              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 registry-proxy-s6vt6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 snapshot-controller-7d9fbc56b8-sb8v4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 snapshot-controller-7d9fbc56b8-thtbp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  local-path-storage          local-path-provisioner-648f6765c9-8n6qc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8sg9x               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m21s  kube-proxy       
	  Normal  Starting                 4m29s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m29s  kubelet          Node addons-273872 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m29s  kubelet          Node addons-273872 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m29s  kubelet          Node addons-273872 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m25s  node-controller  Node addons-273872 event: Registered Node addons-273872 in Controller
	  Normal  NodeReady                3m41s  kubelet          Node addons-273872 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 1c f5 68 9f 00 08 06
	[  +4.451388] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 07 4a e3 be 93 08 06
	[Oct25 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.025995] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.023888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.023905] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.024896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.022924] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +2.047850] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +4.031640] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +8.511323] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[ +16.382644] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:03] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	
	
	==> etcd [34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f] <==
	{"level":"warn","ts":"2025-10-25T09:00:35.008265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52896","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:01:19.942004Z","caller":"traceutil/trace.go:172","msg":"trace[385402812] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"127.936139ms","start":"2025-10-25T09:01:19.814048Z","end":"2025-10-25T09:01:19.941984Z","steps":["trace[385402812] 'process raft request'  (duration: 127.768354ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:01:19.944124Z","caller":"traceutil/trace.go:172","msg":"trace[1503550808] transaction","detail":"{read_only:false; response_revision:1090; number_of_response:1; }","duration":"118.287648ms","start":"2025-10-25T09:01:19.825821Z","end":"2025-10-25T09:01:19.944108Z","steps":["trace[1503550808] 'process raft request'  (duration: 118.203528ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:01:22.862028Z","caller":"traceutil/trace.go:172","msg":"trace[110641307] linearizableReadLoop","detail":"{readStateIndex:1136; appliedIndex:1136; }","duration":"101.438047ms","start":"2025-10-25T09:01:22.760562Z","end":"2025-10-25T09:01:22.862001Z","steps":["trace[110641307] 'read index received'  (duration: 101.426419ms)","trace[110641307] 'applied index is now lower than readState.Index'  (duration: 9.94µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:01:22.862166Z","caller":"traceutil/trace.go:172","msg":"trace[2060150914] transaction","detail":"{read_only:false; response_revision:1104; number_of_response:1; }","duration":"111.216441ms","start":"2025-10-25T09:01:22.750932Z","end":"2025-10-25T09:01:22.862149Z","steps":["trace[2060150914] 'process raft request'  (duration: 111.096749ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:01:22.862144Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.568854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:01:22.862309Z","caller":"traceutil/trace.go:172","msg":"trace[887181409] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1103; }","duration":"101.752743ms","start":"2025-10-25T09:01:22.760548Z","end":"2025-10-25T09:01:22.862301Z","steps":["trace[887181409] 'agreement among raft nodes before linearized reading'  (duration: 101.533774ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:01:23.105200Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.690578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:01:23.105267Z","caller":"traceutil/trace.go:172","msg":"trace[1660278697] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"127.769442ms","start":"2025-10-25T09:01:22.977482Z","end":"2025-10-25T09:01:23.105252Z","steps":["trace[1660278697] 'range keys from in-memory index tree'  (duration: 127.603929ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:01:42.116698Z","caller":"traceutil/trace.go:172","msg":"trace[11700191] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"102.644126ms","start":"2025-10-25T09:01:42.014036Z","end":"2025-10-25T09:01:42.116680Z","steps":["trace[11700191] 'process raft request'  (duration: 76.522583ms)","trace[11700191] 'compare'  (duration: 26.01942ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:01:49.113436Z","caller":"traceutil/trace.go:172","msg":"trace[1921208394] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"102.717747ms","start":"2025-10-25T09:01:49.010697Z","end":"2025-10-25T09:01:49.113415Z","steps":["trace[1921208394] 'process raft request'  (duration: 102.583669ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:01:49.254160Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.672139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" limit:1 ","response":"range_response_count:1 size:3214"}
	{"level":"info","ts":"2025-10-25T09:01:49.254250Z","caller":"traceutil/trace.go:172","msg":"trace[744852649] range","detail":"{range_begin:/registry/jobs/gcp-auth/gcp-auth-certs-patch; range_end:; response_count:1; response_revision:1227; }","duration":"158.788212ms","start":"2025-10-25T09:01:49.095442Z","end":"2025-10-25T09:01:49.254230Z","steps":["trace[744852649] 'agreement among raft nodes before linearized reading'  (duration: 82.107289ms)","trace[744852649] 'range keys from in-memory index tree'  (duration: 76.510222ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:01:49.254482Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.961204ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-5gx27\" limit:1 ","response":"range_response_count:1 size:4154"}
	{"level":"warn","ts":"2025-10-25T09:01:49.254584Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.785814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-dcnjn\" limit:1 ","response":"range_response_count:1 size:4158"}
	{"level":"info","ts":"2025-10-25T09:01:49.254643Z","caller":"traceutil/trace.go:172","msg":"trace[600584683] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-5gx27; range_end:; response_count:1; response_revision:1227; }","duration":"159.138546ms","start":"2025-10-25T09:01:49.095489Z","end":"2025-10-25T09:01:49.254627Z","steps":["trace[600584683] 'agreement among raft nodes before linearized reading'  (duration: 82.044588ms)","trace[600584683] 'range keys from in-memory index tree'  (duration: 76.607087ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:01:49.254684Z","caller":"traceutil/trace.go:172","msg":"trace[254071811] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-create-dcnjn; range_end:; response_count:1; response_revision:1228; }","duration":"138.898117ms","start":"2025-10-25T09:01:49.115770Z","end":"2025-10-25T09:01:49.254668Z","steps":["trace[254071811] 'agreement among raft nodes before linearized reading'  (duration: 138.687442ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:01:49.254480Z","caller":"traceutil/trace.go:172","msg":"trace[963662017] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"161.893534ms","start":"2025-10-25T09:01:49.092563Z","end":"2025-10-25T09:01:49.254457Z","steps":["trace[963662017] 'process raft request'  (duration: 85.020756ms)","trace[963662017] 'compare'  (duration: 76.617336ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:01:49.254630Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.841144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" limit:1 ","response":"range_response_count:1 size:3215"}
	{"level":"info","ts":"2025-10-25T09:01:49.254819Z","caller":"traceutil/trace.go:172","msg":"trace[1352818719] range","detail":"{range_begin:/registry/jobs/gcp-auth/gcp-auth-certs-create; range_end:; response_count:1; response_revision:1228; }","duration":"139.03074ms","start":"2025-10-25T09:01:49.115776Z","end":"2025-10-25T09:01:49.254807Z","steps":["trace[1352818719] 'agreement among raft nodes before linearized reading'  (duration: 138.772308ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:02:20.895771Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.276341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/pvc-c6e0cb1d-628c-460d-83f5-992a360dc1c7\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:02:20.895845Z","caller":"traceutil/trace.go:172","msg":"trace[693556874] range","detail":"{range_begin:/registry/persistentvolumes/pvc-c6e0cb1d-628c-460d-83f5-992a360dc1c7; range_end:; response_count:0; response_revision:1360; }","duration":"133.367068ms","start":"2025-10-25T09:02:20.762463Z","end":"2025-10-25T09:02:20.895831Z","steps":["trace[693556874] 'agreement among raft nodes before linearized reading'  (duration: 47.029527ms)","trace[693556874] 'range keys from in-memory index tree'  (duration: 86.217401ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:02:20.896041Z","caller":"traceutil/trace.go:172","msg":"trace[2057758097] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"133.603773ms","start":"2025-10-25T09:02:20.762418Z","end":"2025-10-25T09:02:20.896021Z","steps":["trace[2057758097] 'process raft request'  (duration: 47.079738ms)","trace[2057758097] 'compare'  (duration: 86.251225ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:02:20.896065Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.946548ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2025-10-25T09:02:20.896105Z","caller":"traceutil/trace.go:172","msg":"trace[387561836] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1361; }","duration":"108.996696ms","start":"2025-10-25T09:02:20.787099Z","end":"2025-10-25T09:02:20.896096Z","steps":["trace[387561836] 'agreement among raft nodes before linearized reading'  (duration: 108.849575ms)"],"step_count":1}
	
	
	==> gcp-auth [faaf1bf843a53afa00f74d85e4bf45d6889a94f6a92148211d9bdb5f583ad0b1] <==
	2025/10/25 09:01:40 GCP Auth Webhook started!
	2025/10/25 09:01:52 Ready to marshal response ...
	2025/10/25 09:01:52 Ready to write response ...
	2025/10/25 09:01:52 Ready to marshal response ...
	2025/10/25 09:01:52 Ready to write response ...
	2025/10/25 09:01:52 Ready to marshal response ...
	2025/10/25 09:01:52 Ready to write response ...
	2025/10/25 09:02:02 Ready to marshal response ...
	2025/10/25 09:02:02 Ready to write response ...
	2025/10/25 09:02:12 Ready to marshal response ...
	2025/10/25 09:02:12 Ready to write response ...
	2025/10/25 09:02:12 Ready to marshal response ...
	2025/10/25 09:02:12 Ready to write response ...
	2025/10/25 09:02:20 Ready to marshal response ...
	2025/10/25 09:02:20 Ready to write response ...
	2025/10/25 09:02:20 Ready to marshal response ...
	2025/10/25 09:02:20 Ready to write response ...
	2025/10/25 09:02:27 Ready to marshal response ...
	2025/10/25 09:02:27 Ready to write response ...
	2025/10/25 09:02:30 Ready to marshal response ...
	2025/10/25 09:02:30 Ready to write response ...
	2025/10/25 09:04:27 Ready to marshal response ...
	2025/10/25 09:04:27 Ready to write response ...
	
	
	==> kernel <==
	 09:04:29 up 46 min,  0 user,  load average: 0.94, 1.76, 1.50
	Linux addons-273872 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa] <==
	I1025 09:02:27.798761       1 main.go:301] handling current node
	I1025 09:02:37.798517       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:02:37.798549       1 main.go:301] handling current node
	I1025 09:02:47.802752       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:02:47.802781       1 main.go:301] handling current node
	I1025 09:02:57.802979       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:02:57.803009       1 main.go:301] handling current node
	I1025 09:03:07.799685       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:03:07.799716       1 main.go:301] handling current node
	I1025 09:03:17.802210       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:03:17.802245       1 main.go:301] handling current node
	I1025 09:03:27.804745       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:03:27.804773       1 main.go:301] handling current node
	I1025 09:03:37.801139       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:03:37.801169       1 main.go:301] handling current node
	I1025 09:03:47.800581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:03:47.800616       1 main.go:301] handling current node
	I1025 09:03:57.800579       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:03:57.800614       1 main.go:301] handling current node
	I1025 09:04:07.798685       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:04:07.798716       1 main.go:301] handling current node
	I1025 09:04:17.804828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:04:17.804874       1 main.go:301] handling current node
	I1025 09:04:27.800451       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:04:27.800489       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa] <==
	 > logger="UnhandledError"
	E1025 09:00:51.699595       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.129.188:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.129.188:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.129.188:443: connect: connection refused" logger="UnhandledError"
	E1025 09:00:51.701684       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.129.188:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.129.188:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.129.188:443: connect: connection refused" logger="UnhandledError"
	W1025 09:00:52.700583       1 handler_proxy.go:99] no RequestInfo found in the context
	W1025 09:00:52.700601       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:00:52.700634       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1025 09:00:52.700651       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1025 09:00:52.700685       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1025 09:00:52.701824       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 09:00:56.712994       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:00:56.713042       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:00:56.713082       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.129.188:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.129.188:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1025 09:00:56.723600       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 09:02:01.599118       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59182: use of closed network connection
	E1025 09:02:01.751204       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59194: use of closed network connection
	I1025 09:02:02.264187       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 09:02:02.464752       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.222.47"}
	I1025 09:02:21.124830       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1025 09:04:27.889638       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.21.181"}
	
	
	==> kube-controller-manager [d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6] <==
	I1025 09:00:04.961216       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:00:04.962207       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:00:04.962229       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:00:04.962597       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:00:04.962623       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:00:04.962657       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:00:04.962706       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:00:04.962730       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:00:04.962785       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:00:04.962792       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:00:04.963092       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:00:04.963215       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:00:04.963691       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:00:04.967728       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:00:04.977887       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:00:04.986428       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:00:07.339509       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1025 09:00:34.971660       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 09:00:34.971798       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1025 09:00:34.971838       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 09:00:34.993579       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1025 09:00:34.996575       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 09:00:35.072847       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:00:35.097446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:00:49.901371       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23] <==
	I1025 09:00:07.372875       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:00:07.494779       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:00:07.596005       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:00:07.596052       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:00:07.596152       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:00:07.622141       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:00:07.622196       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:00:07.629090       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:00:07.629614       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:00:07.629659       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:00:07.631247       1 config.go:200] "Starting service config controller"
	I1025 09:00:07.632381       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:00:07.631706       1 config.go:309] "Starting node config controller"
	I1025 09:00:07.632421       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:00:07.632428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:00:07.631919       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:00:07.632436       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:00:07.631935       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:00:07.632451       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:00:07.732699       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:00:07.732714       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:00:07.733460       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa] <==
	E1025 08:59:57.983340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 08:59:57.983527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:59:57.983557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 08:59:57.983595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 08:59:57.983618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 08:59:57.983629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 08:59:57.983653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 08:59:57.984226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:59:57.984582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:59:57.984610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:59:57.984608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 08:59:57.984765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:59:58.847305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:59:58.860538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:59:58.874627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:59:58.884845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 08:59:58.891793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 08:59:58.923966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:59:58.940085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 08:59:58.968242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:59:58.981848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 08:59:58.998859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:59:59.128439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 08:59:59.155498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1025 08:59:59.579712       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:02:36 addons-273872 kubelet[1284]: I1025 09:02:36.844338    1284 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwfjl\" (UniqueName: \"kubernetes.io/projected/435e028f-5218-438f-a41b-90373e744241-kube-api-access-pwfjl\") pod \"435e028f-5218-438f-a41b-90373e744241\" (UID: \"435e028f-5218-438f-a41b-90373e744241\") "
	Oct 25 09:02:36 addons-273872 kubelet[1284]: I1025 09:02:36.844462    1284 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/435e028f-5218-438f-a41b-90373e744241-gcp-creds\") on node \"addons-273872\" DevicePath \"\""
	Oct 25 09:02:36 addons-273872 kubelet[1284]: I1025 09:02:36.846860    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435e028f-5218-438f-a41b-90373e744241-kube-api-access-pwfjl" (OuterVolumeSpecName: "kube-api-access-pwfjl") pod "435e028f-5218-438f-a41b-90373e744241" (UID: "435e028f-5218-438f-a41b-90373e744241"). InnerVolumeSpecName "kube-api-access-pwfjl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 25 09:02:36 addons-273872 kubelet[1284]: I1025 09:02:36.847424    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^546357f9-b181-11f0-82b2-16d2885f559d" (OuterVolumeSpecName: "task-pv-storage") pod "435e028f-5218-438f-a41b-90373e744241" (UID: "435e028f-5218-438f-a41b-90373e744241"). InnerVolumeSpecName "pvc-bed16055-1c64-4c56-9d31-c5adebecbb7e". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 25 09:02:36 addons-273872 kubelet[1284]: I1025 09:02:36.944810    1284 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-bed16055-1c64-4c56-9d31-c5adebecbb7e\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^546357f9-b181-11f0-82b2-16d2885f559d\") on node \"addons-273872\" "
	Oct 25 09:02:36 addons-273872 kubelet[1284]: I1025 09:02:36.944858    1284 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pwfjl\" (UniqueName: \"kubernetes.io/projected/435e028f-5218-438f-a41b-90373e744241-kube-api-access-pwfjl\") on node \"addons-273872\" DevicePath \"\""
	Oct 25 09:02:36 addons-273872 kubelet[1284]: I1025 09:02:36.950020    1284 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-bed16055-1c64-4c56-9d31-c5adebecbb7e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^546357f9-b181-11f0-82b2-16d2885f559d") on node "addons-273872"
	Oct 25 09:02:37 addons-273872 kubelet[1284]: I1025 09:02:37.045815    1284 reconciler_common.go:299] "Volume detached for volume \"pvc-bed16055-1c64-4c56-9d31-c5adebecbb7e\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^546357f9-b181-11f0-82b2-16d2885f559d\") on node \"addons-273872\" DevicePath \"\""
	Oct 25 09:02:37 addons-273872 kubelet[1284]: I1025 09:02:37.139601    1284 scope.go:117] "RemoveContainer" containerID="5ca4d882b186bd758abbde7abff8983c412257068a611a3a1ca17e4f1452fcff"
	Oct 25 09:02:37 addons-273872 kubelet[1284]: I1025 09:02:37.149246    1284 scope.go:117] "RemoveContainer" containerID="5ca4d882b186bd758abbde7abff8983c412257068a611a3a1ca17e4f1452fcff"
	Oct 25 09:02:37 addons-273872 kubelet[1284]: E1025 09:02:37.149617    1284 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca4d882b186bd758abbde7abff8983c412257068a611a3a1ca17e4f1452fcff\": container with ID starting with 5ca4d882b186bd758abbde7abff8983c412257068a611a3a1ca17e4f1452fcff not found: ID does not exist" containerID="5ca4d882b186bd758abbde7abff8983c412257068a611a3a1ca17e4f1452fcff"
	Oct 25 09:02:37 addons-273872 kubelet[1284]: I1025 09:02:37.149662    1284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca4d882b186bd758abbde7abff8983c412257068a611a3a1ca17e4f1452fcff"} err="failed to get container status \"5ca4d882b186bd758abbde7abff8983c412257068a611a3a1ca17e4f1452fcff\": rpc error: code = NotFound desc = could not find container \"5ca4d882b186bd758abbde7abff8983c412257068a611a3a1ca17e4f1452fcff\": container with ID starting with 5ca4d882b186bd758abbde7abff8983c412257068a611a3a1ca17e4f1452fcff not found: ID does not exist"
	Oct 25 09:02:38 addons-273872 kubelet[1284]: I1025 09:02:38.538656    1284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435e028f-5218-438f-a41b-90373e744241" path="/var/lib/kubelet/pods/435e028f-5218-438f-a41b-90373e744241/volumes"
	Oct 25 09:02:41 addons-273872 kubelet[1284]: I1025 09:02:41.537038    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-6dmpz" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:02:47 addons-273872 kubelet[1284]: I1025 09:02:47.536207    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-s6vt6" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:02:51 addons-273872 kubelet[1284]: E1025 09:02:51.150631    1284 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-7gfht" podUID="10616cc6-5266-4eaf-b6cf-f732ba0431ed"
	Oct 25 09:03:00 addons-273872 kubelet[1284]: I1025 09:03:00.552233    1284 scope.go:117] "RemoveContainer" containerID="b8d710c66bd9d30d53e3a34cea468864fe4e695d6f44aa80dd8eeba9a235213f"
	Oct 25 09:03:00 addons-273872 kubelet[1284]: I1025 09:03:00.561018    1284 scope.go:117] "RemoveContainer" containerID="316421e5bd5809928e679ffd0f3b458202f5e73d048607a6501b8d90d0462b2f"
	Oct 25 09:03:04 addons-273872 kubelet[1284]: I1025 09:03:04.254197    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-7gfht" podStartSLOduration=176.695711163 podStartE2EDuration="2m58.254176946s" podCreationTimestamp="2025-10-25 09:00:06 +0000 UTC" firstStartedPulling="2025-10-25 09:03:02.558813708 +0000 UTC m=+182.102920893" lastFinishedPulling="2025-10-25 09:03:04.117279508 +0000 UTC m=+183.661386676" observedRunningTime="2025-10-25 09:03:04.253284602 +0000 UTC m=+183.797391778" watchObservedRunningTime="2025-10-25 09:03:04.254176946 +0000 UTC m=+183.798284125"
	Oct 25 09:03:36 addons-273872 kubelet[1284]: I1025 09:03:36.536929    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-p8cjx" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:03:52 addons-273872 kubelet[1284]: I1025 09:03:52.537155    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-s6vt6" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:04:02 addons-273872 kubelet[1284]: I1025 09:04:02.537069    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-6dmpz" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:04:27 addons-273872 kubelet[1284]: I1025 09:04:27.828552    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/91577d48-3fb0-41c4-95db-75c6f29b7a82-gcp-creds\") pod \"hello-world-app-5d498dc89-dh8x5\" (UID: \"91577d48-3fb0-41c4-95db-75c6f29b7a82\") " pod="default/hello-world-app-5d498dc89-dh8x5"
	Oct 25 09:04:27 addons-273872 kubelet[1284]: I1025 09:04:27.828696    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zqsc\" (UniqueName: \"kubernetes.io/projected/91577d48-3fb0-41c4-95db-75c6f29b7a82-kube-api-access-6zqsc\") pod \"hello-world-app-5d498dc89-dh8x5\" (UID: \"91577d48-3fb0-41c4-95db-75c6f29b7a82\") " pod="default/hello-world-app-5d498dc89-dh8x5"
	Oct 25 09:04:29 addons-273872 kubelet[1284]: I1025 09:04:29.563179    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-dh8x5" podStartSLOduration=1.279129948 podStartE2EDuration="2.563158618s" podCreationTimestamp="2025-10-25 09:04:27 +0000 UTC" firstStartedPulling="2025-10-25 09:04:28.152784388 +0000 UTC m=+267.696891543" lastFinishedPulling="2025-10-25 09:04:29.436813052 +0000 UTC m=+268.980920213" observedRunningTime="2025-10-25 09:04:29.562193146 +0000 UTC m=+269.106300322" watchObservedRunningTime="2025-10-25 09:04:29.563158618 +0000 UTC m=+269.107265795"
	
	
	==> storage-provisioner [f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da] <==
	W1025 09:04:03.732259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:05.735445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:05.739749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:07.742619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:07.746018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:09.749335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:09.753046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:11.756016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:11.759783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:13.762852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:13.766317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:15.769289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:15.772935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:17.775775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:17.780425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:19.784132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:19.787653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:21.791220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:21.795708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:23.798873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:23.803304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:25.806615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:25.810478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:27.814333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:04:27.820161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-273872 -n addons-273872
helpers_test.go:269: (dbg) Run:  kubectl --context addons-273872 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-l8qdq ingress-nginx-admission-patch-gvs8h
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-273872 describe pod ingress-nginx-admission-create-l8qdq ingress-nginx-admission-patch-gvs8h
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-273872 describe pod ingress-nginx-admission-create-l8qdq ingress-nginx-admission-patch-gvs8h: exit status 1 (58.924837ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-l8qdq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gvs8h" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-273872 describe pod ingress-nginx-admission-create-l8qdq ingress-nginx-admission-patch-gvs8h: exit status 1
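[Editor's note on the captured logs above] The kube-apiserver's repeated "v1beta1.metrics.k8s.io failed ... connection refused" and 503 OpenAPI-download errors all precede the 09:00:56 "Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager" entry, so they read as a transient window while metrics-server was still starting, not as the cause of this failure; the storage-provisioner's v1 Endpoints deprecation warnings are likewise benign, most likely from its Endpoints-based leader-election polling. The NotFound errors for the admission create/patch pods just above are also expected, since those one-shot job pods had likely been garbage-collected between the list and the describe. A minimal diagnostic sketch for the APIService half, assuming kubeconfig access to the profile and the kube-aggregator client (illustrative only, not part of this suite):

// apiservice_check.go - hypothetical diagnostic sketch, not test-suite code.
// Reads the Available condition of the metrics.k8s.io APIService that the
// apiserver log above keeps failing to reach. Assumes the default kubeconfig
// points at the addons-273872 cluster.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregator "k8s.io/kube-aggregator/pkg/client/clientset/versioned"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := aggregator.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	svc, err := client.ApiregistrationV1().APIServices().Get(
		context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range svc.Status.Conditions {
		if cond.Type == apiregv1.Available {
			// The connection-refused/503 lines in the log correspond to
			// Available=False here; Available=True means aggregation recovered.
			fmt.Printf("Available=%s reason=%s: %s\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}

Available=False with a discovery-check reason would corroborate the connection-refused lines; Available=True matches the later "Adding GroupVersion" entry.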
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (259.565546ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:04:30.452299  149985 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:04:30.452696  149985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:04:30.452712  149985 out.go:374] Setting ErrFile to fd 2...
	I1025 09:04:30.452720  149985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:04:30.453058  149985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:04:30.453467  149985 mustload.go:65] Loading cluster: addons-273872
	I1025 09:04:30.453976  149985 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:04:30.454003  149985 addons.go:606] checking whether the cluster is paused
	I1025 09:04:30.454141  149985 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:04:30.454166  149985 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:04:30.454760  149985 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:04:30.473335  149985 ssh_runner.go:195] Run: systemctl --version
	I1025 09:04:30.473417  149985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:04:30.491064  149985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:04:30.593557  149985 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:04:30.593700  149985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:04:30.627510  149985 cri.go:89] found id: "ce286d7940d4cf8bcd1c288bb5b14221bd7bbb6b479f38c75025db52ca126ae3"
	I1025 09:04:30.627537  149985 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:04:30.627543  149985 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:04:30.627546  149985 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:04:30.627549  149985 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:04:30.627553  149985 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:04:30.627566  149985 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:04:30.627569  149985 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:04:30.627571  149985 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:04:30.627577  149985 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:04:30.627581  149985 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:04:30.627585  149985 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:04:30.627589  149985 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:04:30.627593  149985 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:04:30.627597  149985 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:04:30.627604  149985 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:04:30.627608  149985 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:04:30.627614  149985 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:04:30.627618  149985 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:04:30.627622  149985 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:04:30.627627  149985 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:04:30.627630  149985 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:04:30.627634  149985 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:04:30.627638  149985 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:04:30.627641  149985 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:04:30.627643  149985 cri.go:89] found id: ""
	I1025 09:04:30.627685  149985 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:04:30.642600  149985 out.go:203] 
	W1025 09:04:30.643952  149985 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:04:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:04:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:04:30.643971  149985 out.go:285] * 
	* 
	W1025 09:04:30.646991  149985 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:04:30.648481  149985 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
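[Editor's note] Every `addons disable` call in this run fails the same way: before touching the addon, minikube checks that the cluster is not paused by shelling out to `sudo runc list -f json`, and on this cri-o node the default runc state directory `/run/runc` was never created (presumably because cri-o keeps its runtime state elsewhere), so the pre-flight check itself errors and the command exits with MK_ADDON_DISABLE_PAUSED even though nothing is paused. Since CRI exposes no "paused" container state, asking runc directly is the natural check; a minimal workaround sketch under that assumption (not minikube's actual implementation) would treat a missing state directory as "no paused containers":

// paused_check.go - hypothetical tolerant paused-check, not minikube code.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// runcContainer mirrors the fields of `runc list -f json` output we need.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedContainers returns the IDs of paused containers, treating a missing
// runc state directory (the exact failure in the log above) as an empty list.
func pausedContainers() ([]string, error) {
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// "open /run/runc: no such file or directory" means runc tracks no
		// containers under that root, so nothing can be paused there.
		if strings.Contains(stderr.String(), "no such file or directory") {
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, stderr.String())
	}
	var cs []runcContainer
	if out := bytes.TrimSpace(stdout.Bytes()); len(out) > 0 {
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("paused containers: %v\n", ids)
}

Alternatively, pointing runc at whatever state root cri-o actually uses (via runc's global --root flag) would presumably make the original check work, but the correct root is not visible from these logs. The same failure repeats verbatim for the ingress, inspektor-gadget, and metrics-server disable calls below.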
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable ingress --alsologtostderr -v=1: exit status 11 (249.852675ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:04:30.710126  150045 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:04:30.710426  150045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:04:30.710439  150045 out.go:374] Setting ErrFile to fd 2...
	I1025 09:04:30.710443  150045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:04:30.710629  150045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:04:30.710900  150045 mustload.go:65] Loading cluster: addons-273872
	I1025 09:04:30.711258  150045 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:04:30.711273  150045 addons.go:606] checking whether the cluster is paused
	I1025 09:04:30.711372  150045 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:04:30.711393  150045 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:04:30.711815  150045 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:04:30.730056  150045 ssh_runner.go:195] Run: systemctl --version
	I1025 09:04:30.730126  150045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:04:30.749026  150045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:04:30.849124  150045 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:04:30.849202  150045 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:04:30.879006  150045 cri.go:89] found id: "ce286d7940d4cf8bcd1c288bb5b14221bd7bbb6b479f38c75025db52ca126ae3"
	I1025 09:04:30.879038  150045 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:04:30.879042  150045 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:04:30.879045  150045 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:04:30.879048  150045 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:04:30.879052  150045 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:04:30.879056  150045 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:04:30.879060  150045 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:04:30.879063  150045 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:04:30.879078  150045 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:04:30.879083  150045 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:04:30.879087  150045 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:04:30.879092  150045 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:04:30.879096  150045 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:04:30.879101  150045 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:04:30.879110  150045 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:04:30.879115  150045 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:04:30.879119  150045 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:04:30.879122  150045 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:04:30.879125  150045 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:04:30.879127  150045 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:04:30.879130  150045 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:04:30.879132  150045 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:04:30.879134  150045 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:04:30.879137  150045 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:04:30.879140  150045 cri.go:89] found id: ""
	I1025 09:04:30.879204  150045 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:04:30.893660  150045 out.go:203] 
	W1025 09:04:30.894871  150045 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:04:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:04:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:04:30.894888  150045 out.go:285] * 
	* 
	W1025 09:04:30.897878  150045 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:04:30.899381  150045 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.90s)

TestAddons/parallel/InspektorGadget (5.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-w9btk" [f6972bcf-6cf2-486f-9c7e-f0a53a0dddf0] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003057065s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (245.361574ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:02:28.123884  147066 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:28.124173  147066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:28.124184  147066 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:28.124188  147066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:28.124393  147066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:28.124653  147066 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:28.124969  147066 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:28.124983  147066 addons.go:606] checking whether the cluster is paused
	I1025 09:02:28.125063  147066 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:28.125078  147066 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:28.125448  147066 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:28.142756  147066 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:28.142822  147066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:28.159534  147066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:28.258233  147066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:28.258306  147066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:28.286573  147066 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:28.286614  147066 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:28.286618  147066 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:28.286622  147066 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:28.286625  147066 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:28.286628  147066 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:28.286631  147066 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:28.286633  147066 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:28.286635  147066 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:28.286647  147066 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:28.286650  147066 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:28.286653  147066 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:28.286655  147066 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:28.286658  147066 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:28.286661  147066 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:28.286672  147066 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:28.286680  147066 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:28.286684  147066 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:28.286686  147066 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:28.286689  147066 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:28.286691  147066 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:28.286693  147066 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:28.286696  147066 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:28.286698  147066 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:28.286700  147066 cri.go:89] found id: ""
	I1025 09:02:28.286750  147066 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:28.300683  147066 out.go:203] 
	W1025 09:02:28.301811  147066 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:28.301844  147066 out.go:285] * 
	* 
	W1025 09:02:28.304770  147066 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:28.305911  147066 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)

TestAddons/parallel/MetricsServer (5.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.184299ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-jm2zb" [bd49c1cd-fde4-48b8-9120-c799d302450e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003458782s
addons_test.go:463: (dbg) Run:  kubectl --context addons-273872 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (240.266088ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:02:07.122030  145674 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:07.122316  145674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:07.122326  145674 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:07.122331  145674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:07.122523  145674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:07.122767  145674 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:07.123094  145674 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:07.123109  145674 addons.go:606] checking whether the cluster is paused
	I1025 09:02:07.123188  145674 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:07.123203  145674 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:07.123598  145674 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:07.140774  145674 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:07.140823  145674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:07.158705  145674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:07.256974  145674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:07.257072  145674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:07.285568  145674 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:07.285597  145674 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:07.285603  145674 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:07.285608  145674 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:07.285611  145674 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:07.285616  145674 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:07.285620  145674 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:07.285624  145674 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:07.285628  145674 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:07.285640  145674 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:07.285645  145674 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:07.285652  145674 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:07.285661  145674 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:07.285665  145674 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:07.285669  145674 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:07.285689  145674 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:07.285698  145674 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:07.285703  145674 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:07.285706  145674 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:07.285708  145674 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:07.285711  145674 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:07.285713  145674 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:07.285715  145674 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:07.285718  145674 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:07.285720  145674 cri.go:89] found id: ""
	I1025 09:02:07.285762  145674 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:07.299556  145674 out.go:203] 
	W1025 09:02:07.300686  145674 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:07.300704  145674 out.go:285] * 
	* 
	W1025 09:02:07.303598  145674 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:07.304739  145674 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.30s)
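Note: the addon enable/disable failures in this report share the root cause visible in the stderr above: after listing the kube-system containers with crictl, the paused-state check runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this crio node, and the command aborts with MK_ADDON_DISABLE_PAUSED. The following is a minimal standalone sketch of that two-step check, run locally rather than over SSH as the log shows minikube doing; the fallback for a missing /run/runc at the end is a hypothetical mitigation, not minikube's actual behavior.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// List kube-system containers, mirroring the crictl call in the log above.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "crictl failed:", err)
		os.Exit(1)
	}
	fmt.Printf("found %d kube-system containers\n", len(strings.Fields(string(out))))

	// Ask runc which containers it knows about, mirroring the failing call.
	// On this crio node /run/runc is absent, so runc exits 1 with
	// "open /run/runc: no such file or directory".
	if _, err := exec.Command("sudo", "runc", "list", "-f", "json").Output(); err != nil {
		if _, statErr := os.Stat("/run/runc"); os.IsNotExist(statErr) {
			// Hypothetical mitigation: treat a missing runc state directory
			// as "no paused containers" instead of failing the addon command.
			fmt.Println("/run/runc missing: assuming nothing is paused")
			return
		}
		fmt.Fprintln(os.Stderr, "runc list failed:", err)
		os.Exit(1)
	}
	fmt.Println("runc list succeeded")
}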

                                                
                                    
TestAddons/parallel/CSI (33.23s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1025 09:02:04.730089  134145 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 09:02:04.733536  134145 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 09:02:04.733566  134145 kapi.go:107] duration metric: took 3.482214ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.492242ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-273872 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-273872 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [850d8aff-355b-4264-8b1a-c1de4b309dff] Pending
helpers_test.go:352: "task-pv-pod" [850d8aff-355b-4264-8b1a-c1de4b309dff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [850d8aff-355b-4264-8b1a-c1de4b309dff] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004220312s
addons_test.go:572: (dbg) Run:  kubectl --context addons-273872 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-273872 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-273872 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-273872 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-273872 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-273872 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-273872 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [435e028f-5218-438f-a41b-90373e744241] Pending
helpers_test.go:352: "task-pv-pod-restore" [435e028f-5218-438f-a41b-90373e744241] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [435e028f-5218-438f-a41b-90373e744241] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003810712s
addons_test.go:614: (dbg) Run:  kubectl --context addons-273872 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-273872 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-273872 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (240.972429ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 09:02:37.532456  147725 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:37.532754  147725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:37.532764  147725 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:37.532767  147725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:37.532950  147725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:37.533199  147725 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:37.533547  147725 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:37.533564  147725 addons.go:606] checking whether the cluster is paused
	I1025 09:02:37.533642  147725 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:37.533665  147725 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:37.534024  147725 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:37.551825  147725 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:37.551884  147725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:37.569823  147725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:37.667666  147725 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:37.667739  147725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:37.696029  147725 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:37.696048  147725 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:37.696052  147725 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:37.696055  147725 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:37.696057  147725 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:37.696060  147725 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:37.696063  147725 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:37.696065  147725 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:37.696067  147725 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:37.696078  147725 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:37.696081  147725 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:37.696084  147725 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:37.696086  147725 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:37.696088  147725 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:37.696091  147725 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:37.696095  147725 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:37.696098  147725 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:37.696101  147725 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:37.696103  147725 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:37.696106  147725 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:37.696108  147725 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:37.696111  147725 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:37.696113  147725 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:37.696116  147725 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:37.696119  147725 cri.go:89] found id: ""
	I1025 09:02:37.696153  147725 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:37.709551  147725 out.go:203] 
	W1025 09:02:37.710756  147725 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:37.710775  147725 out.go:285] * 
	* 
	W1025 09:02:37.713739  147725 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:37.715017  147725 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (239.961828ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 09:02:37.773191  147788 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:37.773484  147788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:37.773494  147788 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:37.773498  147788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:37.773677  147788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:37.773932  147788 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:37.774240  147788 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:37.774254  147788 addons.go:606] checking whether the cluster is paused
	I1025 09:02:37.774331  147788 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:37.774342  147788 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:37.774760  147788 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:37.792213  147788 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:37.792276  147788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:37.809396  147788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:37.906734  147788 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:37.906810  147788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:37.936287  147788 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:37.936308  147788 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:37.936313  147788 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:37.936318  147788 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:37.936322  147788 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:37.936328  147788 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:37.936333  147788 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:37.936337  147788 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:37.936340  147788 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:37.936366  147788 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:37.936371  147788 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:37.936375  147788 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:37.936379  147788 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:37.936389  147788 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:37.936393  147788 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:37.936400  147788 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:37.936403  147788 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:37.936407  147788 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:37.936409  147788 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:37.936412  147788 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:37.936414  147788 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:37.936416  147788 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:37.936419  147788 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:37.936421  147788 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:37.936424  147788 cri.go:89] found id: ""
	I1025 09:02:37.936470  147788 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:37.950196  147788 out.go:203] 
	W1025 09:02:37.951333  147788 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:37.951372  147788 out.go:285] * 
	* 
	W1025 09:02:37.954668  147788 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:37.955766  147788 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (33.23s)
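Note: the CSI workflow itself (PVC -> pod -> snapshot -> restore) completed; only the trailing addon-disable calls hit the runc error described above. The repeated `kubectl get pvc ... -o jsonpath={.status.phase}` lines are a poll loop waiting for the claim to bind. A minimal standalone sketch of such a loop, reusing the profile and PVC names from this log (the 6m timeout matches the test's wait; the 2s interval is an assumption), could look like:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls .status.phase via kubectl until the claim reports
// Bound or the timeout elapses, the same shape as the repeated helper runs.
func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed polling interval
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-273872", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("pvc is Bound")
}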

                                                
                                    
TestAddons/parallel/Headlamp (2.72s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-273872 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-273872 --alsologtostderr -v=1: exit status 11 (270.797207ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 09:02:02.065376  144033 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:02.065685  144033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:02.065696  144033 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:02.065702  144033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:02.065958  144033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:02.066251  144033 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:02.066718  144033 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:02.066743  144033 addons.go:606] checking whether the cluster is paused
	I1025 09:02:02.066865  144033 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:02.066898  144033 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:02.067482  144033 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:02.085425  144033 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:02.085491  144033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:02.102633  144033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:02.201613  144033 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:02.201737  144033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:02.233186  144033 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:02.233207  144033 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:02.233211  144033 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:02.233214  144033 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:02.233217  144033 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:02.233221  144033 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:02.233224  144033 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:02.233226  144033 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:02.233229  144033 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:02.233239  144033 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:02.233242  144033 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:02.233244  144033 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:02.233247  144033 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:02.233250  144033 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:02.233253  144033 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:02.233257  144033 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:02.233259  144033 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:02.233263  144033 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:02.233266  144033 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:02.233268  144033 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:02.233270  144033 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:02.233273  144033 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:02.233277  144033 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:02.233289  144033 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:02.233292  144033 cri.go:89] found id: ""
	I1025 09:02:02.233335  144033 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:02.248438  144033 out.go:203] 
	W1025 09:02:02.253578  144033 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:02.253604  144033 out.go:285] * 
	* 
	W1025 09:02:02.260290  144033 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:02.263886  144033 out.go:203] 
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-273872 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-273872
helpers_test.go:243: (dbg) docker inspect addons-273872:
-- stdout --
	[
	    {
	        "Id": "26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df",
	        "Created": "2025-10-25T08:59:46.86753105Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 136174,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T08:59:46.902553266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df/hostname",
	        "HostsPath": "/var/lib/docker/containers/26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df/hosts",
	        "LogPath": "/var/lib/docker/containers/26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df/26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df-json.log",
	        "Name": "/addons-273872",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-273872:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-273872",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "26302ced5c293b1e6e8945c9f16946f94345db7d6daaf0a087b444613dce64df",
	                "LowerDir": "/var/lib/docker/overlay2/119d712737ccc9bc344f9d5ff06514fcf5f7ad2fb3991c70e0f1d9bfcb4c9a0d-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/119d712737ccc9bc344f9d5ff06514fcf5f7ad2fb3991c70e0f1d9bfcb4c9a0d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/119d712737ccc9bc344f9d5ff06514fcf5f7ad2fb3991c70e0f1d9bfcb4c9a0d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/119d712737ccc9bc344f9d5ff06514fcf5f7ad2fb3991c70e0f1d9bfcb4c9a0d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-273872",
	                "Source": "/var/lib/docker/volumes/addons-273872/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-273872",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-273872",
	                "name.minikube.sigs.k8s.io": "addons-273872",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8ae4f05606e055f220ae7ac42e548d8100e25c1b392de0467d91de0c72612a6b",
	            "SandboxKey": "/var/run/docker/netns/8ae4f05606e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-273872": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:31:df:ea:fa:ac",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4be60c27eb739040f4d436760938699c48376c5ddf25f116556dbcb7845d0f03",
	                    "EndpointID": "2e0b304e85a000b356abfbc10160fa4efdfcdf5bc06d9b15a67f6b0378c74ee2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-273872",
	                        "26302ced5c29"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
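Note: the Ports map in the inspect output above is where the logged `docker container inspect -f` template reads the "22/tcp" host port (127.0.0.1:32888 here) that the SSH client then dials. A minimal sketch of the same lookup, using the profile name from this report (this is an illustration of the logged command, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	profile := "addons-273872" // container/profile name from this report
	// Same Go template the cli_runner lines above log when resolving SSH.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		profile).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "docker inspect failed:", err)
		os.Exit(1)
	}
	fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
}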
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-273872 -n addons-273872
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-273872 logs -n 25: (1.132945497s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-873386 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-873386   │ jenkins │ v1.37.0 │ 25 Oct 25 08:58 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 08:59 UTC │
	│ delete  │ -p download-only-873386                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-873386   │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 08:59 UTC │
	│ start   │ -o=json --download-only -p download-only-624042 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-624042   │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 08:59 UTC │
	│ delete  │ -p download-only-624042                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-624042   │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 08:59 UTC │
	│ delete  │ -p download-only-873386                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-873386   │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 08:59 UTC │
	│ delete  │ -p download-only-624042                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-624042   │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 08:59 UTC │
	│ start   │ --download-only -p download-docker-376173 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-376173 │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │                     │
	│ delete  │ -p download-docker-376173                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-376173 │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 08:59 UTC │
	│ start   │ --download-only -p binary-mirror-059821 --alsologtostderr --binary-mirror http://127.0.0.1:34279 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-059821   │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │                     │
	│ delete  │ -p binary-mirror-059821                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-059821   │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 08:59 UTC │
	│ addons  │ disable dashboard -p addons-273872                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-273872          │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │                     │
	│ addons  │ enable dashboard -p addons-273872                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-273872          │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │                     │
	│ start   │ -p addons-273872 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-273872          │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 09:01 UTC │
	│ addons  │ addons-273872 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-273872          │ jenkins │ v1.37.0 │ 25 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-273872 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-273872          │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	│ addons  │ enable headlamp -p addons-273872 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-273872          │ jenkins │ v1.37.0 │ 25 Oct 25 09:02 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:59:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
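	Every entry below carries the klog prefix described above, so severity can be filtered mechanically. A minimal sketch, assuming GNU grep and that this log has been saved to a file (last-start.log is a stand-in name):
	
	# keep only warning- and error-level lines (W/E prefix)
	$ grep -E '[WE][0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{6}' last-start.log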
	I1025 08:59:25.302653  135520 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:59:25.302788  135520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:59:25.302800  135520 out.go:374] Setting ErrFile to fd 2...
	I1025 08:59:25.302824  135520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:59:25.303013  135520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 08:59:25.303518  135520 out.go:368] Setting JSON to false
	I1025 08:59:25.304503  135520 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2509,"bootTime":1761380256,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:59:25.304587  135520 start.go:141] virtualization: kvm guest
	I1025 08:59:25.306318  135520 out.go:179] * [addons-273872] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:59:25.307787  135520 notify.go:220] Checking for updates...
	I1025 08:59:25.307854  135520 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 08:59:25.309129  135520 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:59:25.310368  135520 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 08:59:25.311831  135520 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 08:59:25.313142  135520 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 08:59:25.314279  135520 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:59:25.315685  135520 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:59:25.339051  135520 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 08:59:25.339126  135520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:59:25.399606  135520 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-25 08:59:25.389797236 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
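	The docker info blob above is the raw JSON minikube requests with the exact command it logs, docker system info --format "{{json .}}". The fields it cares about (cgroup driver, CPU and memory capacity) can be pulled out directly; a sketch, assuming jq is installed:
	
	$ docker system info --format '{{json .}}' | jq '{CgroupDriver, NCPU, MemTotal}'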
	I1025 08:59:25.399718  135520 docker.go:318] overlay module found
	I1025 08:59:25.401302  135520 out.go:179] * Using the docker driver based on user configuration
	I1025 08:59:25.402467  135520 start.go:305] selected driver: docker
	I1025 08:59:25.402486  135520 start.go:925] validating driver "docker" against <nil>
	I1025 08:59:25.402499  135520 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:59:25.403076  135520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:59:25.456120  135520 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-25 08:59:25.447335937 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:59:25.456288  135520 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:59:25.456593  135520 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:59:25.458214  135520 out.go:179] * Using Docker driver with root privileges
	I1025 08:59:25.459325  135520 cni.go:84] Creating CNI manager for ""
	I1025 08:59:25.459416  135520 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:59:25.459430  135520 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 08:59:25.459492  135520 start.go:349] cluster config:
	{Name:addons-273872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-273872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
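	This cluster config is persisted as JSON a few lines below (profile.go saves it under .minikube/profiles/addons-273872/config.json), so the effective settings of a failed run can be inspected after the fact. A sketch, assuming jq is installed:
	
	$ jq '.Driver, .Memory, .KubernetesConfig.KubernetesVersion' \
	    /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/config.json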
	I1025 08:59:25.460580  135520 out.go:179] * Starting "addons-273872" primary control-plane node in "addons-273872" cluster
	I1025 08:59:25.461533  135520 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 08:59:25.462687  135520 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 08:59:25.463672  135520 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:59:25.463706  135520 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 08:59:25.463715  135520 cache.go:58] Caching tarball of preloaded images
	I1025 08:59:25.463765  135520 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 08:59:25.463831  135520 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 08:59:25.463843  135520 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 08:59:25.464183  135520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/config.json ...
	I1025 08:59:25.464209  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/config.json: {Name:mk02f39a836faf29cc021b57d97f958117e83fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:25.480121  135520 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 08:59:25.480251  135520 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 08:59:25.480272  135520 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 08:59:25.480277  135520 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 08:59:25.480290  135520 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 08:59:25.480300  135520 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 08:59:38.598302  135520 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 08:59:38.598355  135520 cache.go:232] Successfully downloaded all kic artifacts
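	The kicbase image is cached as a tarball under MINIKUBE_HOME and then loaded into the local daemon, which accounts for the ~13s gap in the timestamps above. Whether the load landed can be checked against the digest in the log; a sketch:
	
	$ docker images --digests | grep kicbase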
	I1025 08:59:38.598431  135520 start.go:360] acquireMachinesLock for addons-273872: {Name:mk21cf68fc8ee12ca2f54ce31eed973609b4be09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 08:59:38.598576  135520 start.go:364] duration metric: took 117.791µs to acquireMachinesLock for "addons-273872"
	I1025 08:59:38.598612  135520 start.go:93] Provisioning new machine with config: &{Name:addons-273872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-273872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:59:38.598690  135520 start.go:125] createHost starting for "" (driver="docker")
	I1025 08:59:38.600214  135520 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 08:59:38.600459  135520 start.go:159] libmachine.API.Create for "addons-273872" (driver="docker")
	I1025 08:59:38.600495  135520 client.go:168] LocalClient.Create starting
	I1025 08:59:38.600643  135520 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem
	I1025 08:59:38.729429  135520 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem
	I1025 08:59:38.943908  135520 cli_runner.go:164] Run: docker network inspect addons-273872 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 08:59:38.961138  135520 cli_runner.go:211] docker network inspect addons-273872 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 08:59:38.961221  135520 network_create.go:284] running [docker network inspect addons-273872] to gather additional debugging logs...
	I1025 08:59:38.961241  135520 cli_runner.go:164] Run: docker network inspect addons-273872
	W1025 08:59:38.977933  135520 cli_runner.go:211] docker network inspect addons-273872 returned with exit code 1
	I1025 08:59:38.977964  135520 network_create.go:287] error running [docker network inspect addons-273872]: docker network inspect addons-273872: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-273872 not found
	I1025 08:59:38.977976  135520 network_create.go:289] output of [docker network inspect addons-273872]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-273872 not found
	
	** /stderr **
	I1025 08:59:38.978091  135520 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:59:38.995149  135520 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f086d0}
	I1025 08:59:38.995188  135520 network_create.go:124] attempt to create docker network addons-273872 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 08:59:38.995245  135520 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-273872 addons-273872
	I1025 08:59:39.050339  135520 network_create.go:108] docker network addons-273872 192.168.49.0/24 created
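	The created network can be verified with the same inspect template minikube itself uses above; a shortened sketch:
	
	$ docker network inspect addons-273872 \
	    --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'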
	I1025 08:59:39.050384  135520 kic.go:121] calculated static IP "192.168.49.2" for the "addons-273872" container
	I1025 08:59:39.050453  135520 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 08:59:39.066286  135520 cli_runner.go:164] Run: docker volume create addons-273872 --label name.minikube.sigs.k8s.io=addons-273872 --label created_by.minikube.sigs.k8s.io=true
	I1025 08:59:39.083734  135520 oci.go:103] Successfully created a docker volume addons-273872
	I1025 08:59:39.083818  135520 cli_runner.go:164] Run: docker run --rm --name addons-273872-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-273872 --entrypoint /usr/bin/test -v addons-273872:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 08:59:42.513383  135520 cli_runner.go:217] Completed: docker run --rm --name addons-273872-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-273872 --entrypoint /usr/bin/test -v addons-273872:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (3.429486034s)
	I1025 08:59:42.513419  135520 oci.go:107] Successfully prepared a docker volume addons-273872
	I1025 08:59:42.513454  135520 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:59:42.513479  135520 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 08:59:42.513540  135520 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-273872:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 08:59:46.796684  135520 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-273872:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.283102394s)
	I1025 08:59:46.796725  135520 kic.go:203] duration metric: took 4.28324175s to extract preloaded images to volume ...
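	The preload tarball is unpacked straight into the named volume, so the node container starts with its images already in CRI-O storage. While the volume exists this can be spot-checked from a throwaway container; a sketch, assuming a busybox image is available locally:
	
	$ docker run --rm -v addons-273872:/var busybox ls /var/lib/containers/storage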
	W1025 08:59:46.796825  135520 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 08:59:46.796858  135520 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 08:59:46.796895  135520 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 08:59:46.852313  135520 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-273872 --name addons-273872 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-273872 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-273872 --network addons-273872 --ip 192.168.49.2 --volume addons-273872:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 08:59:47.113238  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Running}}
	I1025 08:59:47.132883  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 08:59:47.152554  135520 cli_runner.go:164] Run: docker exec addons-273872 stat /var/lib/dpkg/alternatives/iptables
	I1025 08:59:47.200606  135520 oci.go:144] the created container "addons-273872" has a running status.
	I1025 08:59:47.200644  135520 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa...
	I1025 08:59:47.454871  135520 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 08:59:47.481500  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 08:59:47.502750  135520 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 08:59:47.502769  135520 kic_runner.go:114] Args: [docker exec --privileged addons-273872 chown docker:docker /home/docker/.ssh/authorized_keys]
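	With the public key installed and ownership fixed, the node is reachable over the SSH port Docker published for the container (resolved a few lines below as 32888). A sketch of a manual connection using the generated key:
	
	$ ssh -i /home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa \
	    -p 32888 docker@127.0.0.1 hostname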
	I1025 08:59:47.544130  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 08:59:47.562487  135520 machine.go:93] provisionDockerMachine start ...
	I1025 08:59:47.562589  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:47.579957  135520 main.go:141] libmachine: Using SSH client type: native
	I1025 08:59:47.580258  135520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1025 08:59:47.580277  135520 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 08:59:47.720071  135520 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-273872
	
	I1025 08:59:47.720097  135520 ubuntu.go:182] provisioning hostname "addons-273872"
	I1025 08:59:47.720148  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:47.738995  135520 main.go:141] libmachine: Using SSH client type: native
	I1025 08:59:47.739200  135520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1025 08:59:47.739215  135520 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-273872 && echo "addons-273872" | sudo tee /etc/hostname
	I1025 08:59:47.890492  135520 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-273872
	
	I1025 08:59:47.890628  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:47.908163  135520 main.go:141] libmachine: Using SSH client type: native
	I1025 08:59:47.908401  135520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1025 08:59:47.908420  135520 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-273872' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-273872/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-273872' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 08:59:48.046882  135520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 08:59:48.046916  135520 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 08:59:48.046940  135520 ubuntu.go:190] setting up certificates
	I1025 08:59:48.046953  135520 provision.go:84] configureAuth start
	I1025 08:59:48.047008  135520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-273872
	I1025 08:59:48.064696  135520 provision.go:143] copyHostCerts
	I1025 08:59:48.064776  135520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 08:59:48.064890  135520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 08:59:48.064963  135520 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 08:59:48.065017  135520 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.addons-273872 san=[127.0.0.1 192.168.49.2 addons-273872 localhost minikube]
	I1025 08:59:48.465921  135520 provision.go:177] copyRemoteCerts
	I1025 08:59:48.465980  135520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 08:59:48.466014  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:48.483118  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 08:59:48.581538  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 08:59:48.600132  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 08:59:48.616687  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 08:59:48.633369  135520 provision.go:87] duration metric: took 586.385193ms to configureAuth
	I1025 08:59:48.633400  135520 ubuntu.go:206] setting minikube options for container-runtime
	I1025 08:59:48.633577  135520 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:59:48.633677  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:48.650609  135520 main.go:141] libmachine: Using SSH client type: native
	I1025 08:59:48.650855  135520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1025 08:59:48.650879  135520 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 08:59:48.896016  135520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 08:59:48.896042  135520 machine.go:96] duration metric: took 1.333533169s to provisionDockerMachine
	I1025 08:59:48.896053  135520 client.go:171] duration metric: took 10.295547813s to LocalClient.Create
	I1025 08:59:48.896069  135520 start.go:167] duration metric: took 10.295613144s to libmachine.API.Create "addons-273872"
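	The tee above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube inside the node and restarts CRI-O so the service CIDR is treated as an insecure registry range. The written file can be confirmed from the host; a sketch:
	
	$ minikube -p addons-273872 ssh "cat /etc/sysconfig/crio.minikube"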
	I1025 08:59:48.896077  135520 start.go:293] postStartSetup for "addons-273872" (driver="docker")
	I1025 08:59:48.896086  135520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 08:59:48.896134  135520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 08:59:48.896166  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:48.914499  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 08:59:49.015610  135520 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 08:59:49.019360  135520 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 08:59:49.019384  135520 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 08:59:49.019395  135520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 08:59:49.019448  135520 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 08:59:49.019471  135520 start.go:296] duration metric: took 123.388312ms for postStartSetup
	I1025 08:59:49.019754  135520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-273872
	I1025 08:59:49.037320  135520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/config.json ...
	I1025 08:59:49.037647  135520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:59:49.037698  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:49.054098  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 08:59:49.150172  135520 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 08:59:49.154363  135520 start.go:128] duration metric: took 10.555633922s to createHost
	I1025 08:59:49.154391  135520 start.go:83] releasing machines lock for "addons-273872", held for 10.555796085s
	I1025 08:59:49.154451  135520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-273872
	I1025 08:59:49.171634  135520 ssh_runner.go:195] Run: cat /version.json
	I1025 08:59:49.171674  135520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 08:59:49.171677  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:49.171733  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 08:59:49.187878  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 08:59:49.188804  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 08:59:49.283719  135520 ssh_runner.go:195] Run: systemctl --version
	I1025 08:59:49.337939  135520 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 08:59:49.371322  135520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 08:59:49.376105  135520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 08:59:49.376166  135520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 08:59:49.402180  135520 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 08:59:49.402206  135520 start.go:495] detecting cgroup driver to use...
	I1025 08:59:49.402242  135520 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 08:59:49.402297  135520 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 08:59:49.418034  135520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 08:59:49.430015  135520 docker.go:218] disabling cri-docker service (if available) ...
	I1025 08:59:49.430066  135520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 08:59:49.445750  135520 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 08:59:49.462443  135520 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 08:59:49.541701  135520 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 08:59:49.626683  135520 docker.go:234] disabling docker service ...
	I1025 08:59:49.626744  135520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 08:59:49.644249  135520 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 08:59:49.656610  135520 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 08:59:49.733509  135520 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 08:59:49.813722  135520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 08:59:49.826019  135520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 08:59:49.839597  135520 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 08:59:49.839664  135520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.849447  135520 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 08:59:49.849502  135520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.858219  135520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.866548  135520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.874871  135520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 08:59:49.882850  135520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.891222  135520 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:59:49.904037  135520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
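	Taken together, the sed edits above pin the pause image, switch the cgroup manager to systemd, move conmon into the pod cgroup, and open unprivileged low ports. The resulting keys can be confirmed in one pass; a sketch:
	
	$ minikube -p addons-273872 ssh \
	    "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"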
	I1025 08:59:49.912859  135520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 08:59:49.920257  135520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 08:59:49.927498  135520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:59:50.003808  135520 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 08:59:50.101887  135520 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 08:59:50.101969  135520 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 08:59:50.105802  135520 start.go:563] Will wait 60s for crictl version
	I1025 08:59:50.105860  135520 ssh_runner.go:195] Run: which crictl
	I1025 08:59:50.109280  135520 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 08:59:50.133695  135520 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
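	The same handshake can be reproduced by hand: crictl talks to the socket configured in /etc/crictl.yaml above. A sketch:
	
	$ minikube -p addons-273872 ssh "sudo crictl version"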
	I1025 08:59:50.133817  135520 ssh_runner.go:195] Run: crio --version
	I1025 08:59:50.159718  135520 ssh_runner.go:195] Run: crio --version
	I1025 08:59:50.187805  135520 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 08:59:50.188807  135520 cli_runner.go:164] Run: docker network inspect addons-273872 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:59:50.206638  135520 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 08:59:50.210584  135520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:59:50.220445  135520 kubeadm.go:883] updating cluster {Name:addons-273872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-273872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 08:59:50.220557  135520 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:59:50.220621  135520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:59:50.249434  135520 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:59:50.249455  135520 crio.go:433] Images already preloaded, skipping extraction
	I1025 08:59:50.249496  135520 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:59:50.273069  135520 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:59:50.273092  135520 cache_images.go:85] Images are preloaded, skipping loading
	I1025 08:59:50.273099  135520 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 08:59:50.273186  135520 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-273872 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-273872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 08:59:50.273245  135520 ssh_runner.go:195] Run: crio config
	I1025 08:59:50.315240  135520 cni.go:84] Creating CNI manager for ""
	I1025 08:59:50.315264  135520 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:59:50.315283  135520 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 08:59:50.315305  135520 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-273872 NodeName:addons-273872 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 08:59:50.315447  135520 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-273872"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
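	This rendered config is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases (v1.26+) can sanity-check such a file without touching the cluster; a sketch, run inside the node once minikube has moved the file into place:
	
	$ minikube -p addons-273872 ssh \
	    "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml"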
	I1025 08:59:50.315505  135520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 08:59:50.323728  135520 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 08:59:50.323789  135520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 08:59:50.331091  135520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 08:59:50.343196  135520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 08:59:50.357173  135520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1025 08:59:50.369011  135520 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 08:59:50.372604  135520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:59:50.382469  135520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:59:50.461751  135520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:59:50.485767  135520 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872 for IP: 192.168.49.2
	I1025 08:59:50.485787  135520 certs.go:195] generating shared ca certs ...
	I1025 08:59:50.485807  135520 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.485937  135520 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 08:59:50.609190  135520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt ...
	I1025 08:59:50.609222  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt: {Name:mk2a0bf68b60a6c965e83a3989bb90c992cb6912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.609407  135520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key ...
	I1025 08:59:50.609420  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key: {Name:mk7108a76ea2395e018371973e19ff685f801980 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.609513  135520 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 08:59:50.640724  135520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt ...
	I1025 08:59:50.640752  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt: {Name:mkde7e06a909eaa2d04a061512dd265eed9be2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.640900  135520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key ...
	I1025 08:59:50.640911  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key: {Name:mk25b2121521c2fb0bd2ad6475a236cb9b85a15e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.640971  135520 certs.go:257] generating profile certs ...
	I1025 08:59:50.641023  135520 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.key
	I1025 08:59:50.641038  135520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt with IP's: []
	I1025 08:59:50.958213  135520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt ...
	I1025 08:59:50.958244  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: {Name:mk55607e53c3612a6c4997d35a7ebbdb7769e0b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.958435  135520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.key ...
	I1025 08:59:50.958449  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.key: {Name:mkb5eaf77dde3c7dbe83c49037b5eea1c43d9e0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:50.958518  135520 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.key.0f3ef996
	I1025 08:59:50.958544  135520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt.0f3ef996 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 08:59:51.075249  135520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt.0f3ef996 ...
	I1025 08:59:51.075281  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt.0f3ef996: {Name:mk73b952c38b492e0b6068e78abffccb1b670107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:51.075464  135520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.key.0f3ef996 ...
	I1025 08:59:51.075478  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.key.0f3ef996: {Name:mkb228675fe9a428abb98e87fb3540e4af1636d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:51.075552  135520 certs.go:382] copying /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt.0f3ef996 -> /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt
	I1025 08:59:51.075626  135520 certs.go:386] copying /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.key.0f3ef996 -> /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.key
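Note: the apiserver cert is generated under a hash-suffixed name (apiserver.crt.0f3ef996) and then copied into place, evidently so that a different SAN set on a later start produces a new file rather than silently reusing a stale cert. One way to confirm the SANs that were baked in (works with any OpenSSL):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
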
	I1025 08:59:51.075672  135520 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.key
	I1025 08:59:51.075690  135520 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.crt with IP's: []
	I1025 08:59:51.372241  135520 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.crt ...
	I1025 08:59:51.372272  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.crt: {Name:mkff79941901f1aad29cee168c50a75a1746f900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:51.372453  135520 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.key ...
	I1025 08:59:51.372467  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.key: {Name:mk8e9331221bd4594930359cb7deb1ed51b45a61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:59:51.372706  135520 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 08:59:51.372744  135520 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 08:59:51.372770  135520 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 08:59:51.372791  135520 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 08:59:51.373379  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 08:59:51.392061  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 08:59:51.410505  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 08:59:51.429410  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 08:59:51.447021  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 08:59:51.463912  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 08:59:51.480432  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 08:59:51.497318  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 08:59:51.514007  135520 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 08:59:51.532115  135520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 08:59:51.544185  135520 ssh_runner.go:195] Run: openssl version
	I1025 08:59:51.550243  135520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 08:59:51.560546  135520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:59:51.564135  135520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:59:51.564182  135520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:59:51.597887  135520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
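Note: b5213941 is the OpenSSL subject hash of minikubeCA.pem, and the <hash>.0 symlink under /etc/ssl/certs is the name the system trust store resolves during verification. The two commands above amount to:

    # compute the subject hash OpenSSL uses for CA lookup symlinks
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
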
	I1025 08:59:51.606635  135520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 08:59:51.610303  135520 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
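Note: the failed stat is expected on a fresh node; a missing apiserver-kubelet-client.crt is the "likely first start" signal the log mentions. The check reduces to:

    # exit status 1 from stat => no kubeadm-generated certs yet => fresh cluster
    if ! sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
        echo "first start: kubeadm init will generate the remaining certs"
    fi
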
	I1025 08:59:51.610379  135520 kubeadm.go:400] StartCluster: {Name:addons-273872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-273872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:59:51.610448  135520 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:59:51.610488  135520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:59:51.637025  135520 cri.go:89] found id: ""
	I1025 08:59:51.637117  135520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 08:59:51.645244  135520 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 08:59:51.653152  135520 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 08:59:51.653209  135520 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 08:59:51.660893  135520 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 08:59:51.660909  135520 kubeadm.go:157] found existing configuration files:
	
	I1025 08:59:51.660953  135520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 08:59:51.668309  135520 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 08:59:51.668389  135520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 08:59:51.675797  135520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 08:59:51.683097  135520 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 08:59:51.683158  135520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 08:59:51.690436  135520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 08:59:51.697971  135520 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 08:59:51.698017  135520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 08:59:51.705403  135520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 08:59:51.712771  135520 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 08:59:51.712842  135520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 08:59:51.720032  135520 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 08:59:51.756375  135520 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 08:59:51.756444  135520 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 08:59:51.775660  135520 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 08:59:51.775749  135520 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 08:59:51.775809  135520 kubeadm.go:318] OS: Linux
	I1025 08:59:51.775886  135520 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 08:59:51.775968  135520 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 08:59:51.776049  135520 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 08:59:51.776124  135520 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 08:59:51.776205  135520 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 08:59:51.776273  135520 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 08:59:51.776324  135520 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 08:59:51.776411  135520 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 08:59:51.829988  135520 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 08:59:51.830125  135520 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 08:59:51.830261  135520 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 08:59:51.837375  135520 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 08:59:51.840056  135520 out.go:252]   - Generating certificates and keys ...
	I1025 08:59:51.840147  135520 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 08:59:51.840233  135520 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 08:59:52.084318  135520 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 08:59:52.116459  135520 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 08:59:52.410394  135520 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 08:59:52.684485  135520 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 08:59:52.822848  135520 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 08:59:52.823001  135520 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-273872 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 08:59:52.962777  135520 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 08:59:52.962982  135520 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-273872 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 08:59:53.026827  135520 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 08:59:53.244992  135520 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 08:59:53.335321  135520 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 08:59:53.335829  135520 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 08:59:53.899141  135520 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 08:59:54.591086  135520 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 08:59:54.892096  135520 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 08:59:55.113050  135520 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 08:59:55.260915  135520 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 08:59:55.261456  135520 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 08:59:55.265317  135520 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 08:59:55.267645  135520 out.go:252]   - Booting up control plane ...
	I1025 08:59:55.267762  135520 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 08:59:55.267872  135520 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 08:59:55.267973  135520 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 08:59:55.281116  135520 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 08:59:55.281254  135520 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 08:59:55.287559  135520 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 08:59:55.287781  135520 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 08:59:55.287845  135520 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 08:59:55.383491  135520 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 08:59:55.383630  135520 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 08:59:55.885230  135520 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.887408ms
	I1025 08:59:55.890337  135520 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 08:59:55.890489  135520 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 08:59:55.890585  135520 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 08:59:55.890653  135520 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 08:59:57.294507  135520 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.404050454s
	I1025 08:59:57.986132  135520 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.095773935s
	I1025 08:59:59.891487  135520 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001103158s
	I1025 08:59:59.902273  135520 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 08:59:59.911531  135520 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 08:59:59.919083  135520 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 08:59:59.919388  135520 kubeadm.go:318] [mark-control-plane] Marking the node addons-273872 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 08:59:59.926571  135520 kubeadm.go:318] [bootstrap-token] Using token: daokbe.xkqhffctwdfi006u
	I1025 08:59:59.928127  135520 out.go:252]   - Configuring RBAC rules ...
	I1025 08:59:59.928291  135520 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 08:59:59.930740  135520 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 08:59:59.935254  135520 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 08:59:59.937393  135520 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 08:59:59.940446  135520 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 08:59:59.942575  135520 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:00:00.297338  135520 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:00:00.712376  135520 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:00:01.297327  135520 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:00:01.298205  135520 kubeadm.go:318] 
	I1025 09:00:01.298303  135520 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:00:01.298329  135520 kubeadm.go:318] 
	I1025 09:00:01.298444  135520 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:00:01.298476  135520 kubeadm.go:318] 
	I1025 09:00:01.298510  135520 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:00:01.298572  135520 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:00:01.298622  135520 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:00:01.298640  135520 kubeadm.go:318] 
	I1025 09:00:01.298722  135520 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:00:01.298733  135520 kubeadm.go:318] 
	I1025 09:00:01.298815  135520 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:00:01.298833  135520 kubeadm.go:318] 
	I1025 09:00:01.298898  135520 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:00:01.298985  135520 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:00:01.299059  135520 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:00:01.299066  135520 kubeadm.go:318] 
	I1025 09:00:01.299137  135520 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:00:01.299203  135520 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:00:01.299209  135520 kubeadm.go:318] 
	I1025 09:00:01.299279  135520 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token daokbe.xkqhffctwdfi006u \
	I1025 09:00:01.299410  135520 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6e42eae48b755d443fba2bbd8cd2499bc8de14d7e81dc26af35578c948bc74ab \
	I1025 09:00:01.299431  135520 kubeadm.go:318] 	--control-plane 
	I1025 09:00:01.299437  135520 kubeadm.go:318] 
	I1025 09:00:01.299512  135520 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:00:01.299518  135520 kubeadm.go:318] 
	I1025 09:00:01.299590  135520 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token daokbe.xkqhffctwdfi006u \
	I1025 09:00:01.299688  135520 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6e42eae48b755d443fba2bbd8cd2499bc8de14d7e81dc26af35578c948bc74ab 
	I1025 09:00:01.302088  135520 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:00:01.302224  135520 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
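Note: both preflight warnings above are non-fatal here. The kernel config check fails because the 6.8.0-1042-gcp kernel does not ship the "configs" module, and kubelet is launched directly by minikube rather than enabled as a unit. If boot-time enablement were wanted, the fix is the one the warning names:

    sudo systemctl enable kubelet.service
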
	I1025 09:00:01.302255  135520 cni.go:84] Creating CNI manager for ""
	I1025 09:00:01.302272  135520 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:00:01.303895  135520 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:00:01.304869  135520 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:00:01.308989  135520 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:00:01.309004  135520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:00:01.321720  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
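Note: with the docker driver plus crio runtime, minikube picks kindnet as the CNI and applies its manifest with the bundled kubectl. A follow-up check could confirm the DaemonSet rolled out; the app=kindnet label here is an assumption about the manifest, not taken from this log:

    # assumes kindnet's DaemonSet carries the label app=kindnet
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get ds,pods -l app=kindnet
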
	I1025 09:00:01.521299  135520 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:00:01.521489  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-273872 minikube.k8s.io/updated_at=2025_10_25T09_00_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=addons-273872 minikube.k8s.io/primary=true
	I1025 09:00:01.521496  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:01.530534  135520 ops.go:34] apiserver oom_adj: -16
	I1025 09:00:01.605453  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:02.106336  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:02.605835  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:03.106554  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:03.605509  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:04.106585  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:04.606296  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:05.106475  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:05.606449  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:06.106298  135520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:00:06.168689  135520 kubeadm.go:1113] duration metric: took 4.647294753s to wait for elevateKubeSystemPrivileges
	I1025 09:00:06.168728  135520 kubeadm.go:402] duration metric: took 14.558354848s to StartCluster
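Note: the repeated "get sa default" calls above are a 500ms poll; the default ServiceAccount only appears once the controller-manager's service-account controllers are serving, which is what elevateKubeSystemPrivileges waits on. As a loop:

    # poll until the "default" ServiceAccount exists in the default namespace
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
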
	I1025 09:00:06.168753  135520 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:00:06.168907  135520 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:00:06.169582  135520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:00:06.169806  135520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:00:06.169853  135520 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:00:06.169893  135520 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
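Note: the toEnable map above is what this profile resolved to at start time; individual entries can be toggled later from the CLI, e.g.:

    out/minikube-linux-amd64 -p addons-273872 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-273872 addons disable inspektor-gadget
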
	I1025 09:00:06.170039  135520 addons.go:69] Setting yakd=true in profile "addons-273872"
	I1025 09:00:06.170065  135520 addons.go:69] Setting default-storageclass=true in profile "addons-273872"
	I1025 09:00:06.170088  135520 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-273872"
	I1025 09:00:06.170096  135520 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:00:06.170105  135520 addons.go:69] Setting cloud-spanner=true in profile "addons-273872"
	I1025 09:00:06.170111  135520 addons.go:69] Setting ingress-dns=true in profile "addons-273872"
	I1025 09:00:06.170122  135520 addons.go:238] Setting addon cloud-spanner=true in "addons-273872"
	I1025 09:00:06.170088  135520 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-273872"
	I1025 09:00:06.170112  135520 addons.go:69] Setting gcp-auth=true in profile "addons-273872"
	I1025 09:00:06.170070  135520 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-273872"
	I1025 09:00:06.170141  135520 addons.go:238] Setting addon ingress-dns=true in "addons-273872"
	I1025 09:00:06.170149  135520 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-273872"
	I1025 09:00:06.170167  135520 mustload.go:65] Loading cluster: addons-273872
	I1025 09:00:06.170178  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170184  135520 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-273872"
	I1025 09:00:06.170194  135520 addons.go:69] Setting metrics-server=true in profile "addons-273872"
	I1025 09:00:06.170209  135520 addons.go:238] Setting addon metrics-server=true in "addons-273872"
	I1025 09:00:06.170211  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170211  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170225  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170101  135520 addons.go:238] Setting addon yakd=true in "addons-273872"
	I1025 09:00:06.170411  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170445  135520 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:00:06.170576  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.170687  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.170709  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.170712  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.170733  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.170769  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.170856  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.171041  135520 addons.go:69] Setting storage-provisioner=true in profile "addons-273872"
	I1025 09:00:06.171066  135520 addons.go:238] Setting addon storage-provisioner=true in "addons-273872"
	I1025 09:00:06.171115  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170186  135520 addons.go:69] Setting inspektor-gadget=true in profile "addons-273872"
	I1025 09:00:06.171384  135520 addons.go:238] Setting addon inspektor-gadget=true in "addons-273872"
	I1025 09:00:06.171409  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.171640  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.171887  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.172252  135520 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-273872"
	I1025 09:00:06.172274  135520 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-273872"
	I1025 09:00:06.170178  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.172436  135520 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-273872"
	I1025 09:00:06.172463  135520 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-273872"
	I1025 09:00:06.172492  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.170041  135520 addons.go:69] Setting ingress=true in profile "addons-273872"
	I1025 09:00:06.173184  135520 addons.go:238] Setting addon ingress=true in "addons-273872"
	I1025 09:00:06.173220  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.173753  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.174286  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.172890  135520 addons.go:69] Setting registry=true in profile "addons-273872"
	I1025 09:00:06.174806  135520 addons.go:238] Setting addon registry=true in "addons-273872"
	I1025 09:00:06.174836  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.172914  135520 addons.go:69] Setting registry-creds=true in profile "addons-273872"
	I1025 09:00:06.177056  135520 addons.go:238] Setting addon registry-creds=true in "addons-273872"
	I1025 09:00:06.177088  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.172766  135520 out.go:179] * Verifying Kubernetes components...
	I1025 09:00:06.172925  135520 addons.go:69] Setting volumesnapshots=true in profile "addons-273872"
	I1025 09:00:06.177928  135520 addons.go:238] Setting addon volumesnapshots=true in "addons-273872"
	I1025 09:00:06.178003  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.172933  135520 addons.go:69] Setting volcano=true in profile "addons-273872"
	I1025 09:00:06.178417  135520 addons.go:238] Setting addon volcano=true in "addons-273872"
	I1025 09:00:06.178453  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.182889  135520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:00:06.183565  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.183801  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.184561  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.185446  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.189376  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.195433  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.229980  135520 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 09:00:06.232027  135520 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 09:00:06.232050  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 09:00:06.232111  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.238452  135520 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:00:06.239620  135520 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:00:06.239641  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:00:06.239699  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.240045  135520 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 09:00:06.244080  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 09:00:06.245631  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 09:00:06.245928  135520 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 09:00:06.246418  135520 addons.go:238] Setting addon default-storageclass=true in "addons-273872"
	I1025 09:00:06.246463  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.246954  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.247566  135520 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:00:06.247580  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 09:00:06.247635  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.249609  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.250188  135520 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 09:00:06.250208  135520 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 09:00:06.250155  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 09:00:06.250268  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.255108  135520 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 09:00:06.258001  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 09:00:06.260018  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 09:00:06.262497  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 09:00:06.264145  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 09:00:06.264501  135520 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 09:00:06.264525  135520 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 09:00:06.264604  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.266374  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 09:00:06.268973  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 09:00:06.268991  135520 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 09:00:06.269066  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.286405  135520 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 09:00:06.290120  135520 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 09:00:06.290158  135520 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 09:00:06.290240  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.290427  135520 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-273872"
	I1025 09:00:06.290469  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:06.291029  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:06.303945  135520 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 09:00:06.305002  135520 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 09:00:06.305025  135520 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 09:00:06.305100  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.311912  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.314239  135520 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	W1025 09:00:06.314951  135520 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
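Note: the volcano failure above is expected with ContainerRuntime=crio; the addon callback rejects the runtime and the rest of the enable loop continues. Per-addon status can be read back with:

    out/minikube-linux-amd64 -p addons-273872 addons list | grep volcano
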
	I1025 09:00:06.315163  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.315577  135520 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:00:06.315591  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 09:00:06.315643  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.327382  135520 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 09:00:06.329260  135520 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:00:06.329280  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 09:00:06.329384  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.331166  135520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:00:06.335288  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.338239  135520 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1025 09:00:06.339039  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.339463  135520 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:00:06.339506  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 09:00:06.339621  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.351024  135520 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 09:00:06.354162  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.355373  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.355971  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.357549  135520 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:00:06.358690  135520 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:00:06.360160  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.360738  135520 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:00:06.360756  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 09:00:06.360817  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.361972  135520 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 09:00:06.363074  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.364148  135520 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 09:00:06.366422  135520 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 09:00:06.366486  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 09:00:06.366567  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.369140  135520 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:00:06.369158  135520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:00:06.369215  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.391042  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.397312  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.401867  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.405153  135520 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 09:00:06.407474  135520 out.go:179]   - Using image docker.io/busybox:stable
	I1025 09:00:06.407490  135520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:00:06.408837  135520 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:00:06.408855  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 09:00:06.408911  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:06.417408  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.427766  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.447906  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:06.493000  135520 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:06.493029  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 09:00:06.514590  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 09:00:06.515596  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:06.518090  135520 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 09:00:06.518114  135520 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 09:00:06.524290  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:00:06.524761  135520 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 09:00:06.524780  135520 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 09:00:06.541143  135520 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 09:00:06.541167  135520 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 09:00:06.542563  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:00:06.550866  135520 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 09:00:06.550894  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 09:00:06.551778  135520 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 09:00:06.551809  135520 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 09:00:06.574320  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 09:00:06.574425  135520 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 09:00:06.578006  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:00:06.586617  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:00:06.592375  135520 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 09:00:06.592402  135520 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 09:00:06.593129  135520 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 09:00:06.593147  135520 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 09:00:06.593264  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:00:06.595042  135520 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 09:00:06.595060  135520 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 09:00:06.601682  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:00:06.606219  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:00:06.611206  135520 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 09:00:06.611290  135520 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 09:00:06.636572  135520 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:00:06.636690  135520 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 09:00:06.638429  135520 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:00:06.638451  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 09:00:06.640276  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:00:06.647475  135520 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:00:06.647496  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 09:00:06.649766  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 09:00:06.649850  135520 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 09:00:06.668553  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 09:00:06.668585  135520 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 09:00:06.674388  135520 node_ready.go:35] waiting up to 6m0s for node "addons-273872" to be "Ready" ...
	I1025 09:00:06.674667  135520 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
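The host.minikube.internal record lets pods resolve the host-side gateway (192.168.49.1 here); per the line above, minikube writes it into the CoreDNS ConfigMap. A quick way to confirm the injected entry, assuming the stock ConfigMap name in kube-system:

	kubectl -n kube-system get configmap coredns -o yaml | grep -B1 -A1 host.minikube.internal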
	I1025 09:00:06.690957  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:00:06.712142  135520 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:00:06.712166  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 09:00:06.715013  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 09:00:06.715044  135520 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 09:00:06.716075  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:00:06.722091  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:00:06.779588  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 09:00:06.779630  135520 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 09:00:06.804966  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:00:06.848537  135520 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 09:00:06.848590  135520 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 09:00:06.919869  135520 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 09:00:06.919895  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 09:00:06.993974  135520 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 09:00:06.994081  135520 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 09:00:07.036939  135520 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 09:00:07.036961  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 09:00:07.104211  135520 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 09:00:07.104233  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 09:00:07.174877  135520 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:00:07.174903  135520 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 09:00:07.184416  135520 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-273872" context rescaled to 1 replicas
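Rescaling coredns to one replica is the usual single-node economy; the equivalent manual step, assuming the stock deployment name, would be:

	kubectl -n kube-system scale deployment coredns --replicas=1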
	I1025 09:00:07.249159  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1025 09:00:07.456822  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:07.456870  135520 retry.go:31] will retry after 258.693862ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
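Every retry of this apply fails identically because the error is client-side: kubectl's validation found a YAML document in ig-crd.yaml with neither apiVersion nor kind set, and re-applying the same file cannot fix that (the AppArmor annotation warning above it is separate and non-fatal). For contrast, a minimal sketch of a CRD manifest that passes this validation; the group and names are placeholders, not the addon's actual content:

	kubectl apply -f - <<'EOF'
	apiVersion: apiextensions.k8s.io/v1   # required type metadata
	kind: CustomResourceDefinition        # required type metadata
	metadata:
	  name: traces.gadget.example.io      # placeholder: <plural>.<group>
	spec:
	  group: gadget.example.io
	  scope: Namespaced
	  names:
	    plural: traces
	    singular: trace
	    kind: Trace
	  versions:
	  - name: v1alpha1
	    served: true
	    storage: true
	    schema:
	      openAPIV3Schema:
	        type: object
	        x-kubernetes-preserve-unknown-fields: true
	EOF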
	I1025 09:00:07.716074  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:07.785206  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.207153235s)
	I1025 09:00:07.785246  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.198595794s)
	I1025 09:00:07.785275  135520 addons.go:479] Verifying addon ingress=true in "addons-273872"
	I1025 09:00:07.785323  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.192039858s)
	I1025 09:00:07.785400  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.183653599s)
	I1025 09:00:07.785426  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.179189577s)
	I1025 09:00:07.785497  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.145196435s)
	I1025 09:00:07.785611  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.094622689s)
	I1025 09:00:07.785637  135520 addons.go:479] Verifying addon metrics-server=true in "addons-273872"
	I1025 09:00:07.785654  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.069548643s)
	I1025 09:00:07.785683  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.063569403s)
	I1025 09:00:07.785685  135520 addons.go:479] Verifying addon registry=true in "addons-273872"
	I1025 09:00:07.786840  135520 out.go:179] * Verifying registry addon...
	I1025 09:00:07.786865  135520 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-273872 service yakd-dashboard -n yakd-dashboard
	
	I1025 09:00:07.786840  135520 out.go:179] * Verifying ingress addon...
	I1025 09:00:07.789654  135520 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 09:00:07.789942  135520 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1025 09:00:07.795775  135520 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
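The default-storageclass failure above is a plain optimistic-concurrency conflict: something else updated the local-path StorageClass between minikube's read and its write, so the apiserver rejected the stale resourceVersion. The operation is safe to retry; a hedged sketch using kubectl patch, which re-reads the object on every attempt:

	until kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'; do
	  sleep 1
	done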
	I1025 09:00:07.796171  135520 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 09:00:07.796212  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:07.895117  135520 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:00:07.895141  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:08.253237  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.44820013s)
	W1025 09:00:08.253288  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 09:00:08.253313  135520 retry.go:31] will retry after 359.915373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
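This one is an ordering problem rather than a bad manifest: the VolumeSnapshotClass object is applied in the same batch that creates its CRD, and the new kind is not in the REST mapping until the CRD reports Established, hence the retry loop. Splitting the apply and waiting on the CRDs avoids it; a sketch against the same file names:

	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml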
	I1025 09:00:08.253480  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.004213665s)
	I1025 09:00:08.253520  135520 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-273872"
	I1025 09:00:08.255809  135520 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 09:00:08.258250  135520 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 09:00:08.262447  135520 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:00:08.262470  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:08.363297  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:08.363535  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:08.422933  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:08.422967  135520 retry.go:31] will retry after 382.154852ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:08.613813  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1025 09:00:08.677113  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:08.762077  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:08.792786  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:08.792925  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:08.806048  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:09.261929  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:09.362243  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:09.362405  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:09.761499  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:09.792920  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:09.793113  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:10.261625  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:10.362373  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:10.362513  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:10.677554  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:10.761151  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:10.792483  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:10.792547  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:11.088008  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.474140731s)
	I1025 09:00:11.088079  135520 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.282005281s)
	W1025 09:00:11.088101  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:11.088119  135520 retry.go:31] will retry after 338.329511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:11.262591  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:11.293100  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:11.293250  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:11.427490  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:11.761240  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:11.792561  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:11.792713  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:11.956253  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:11.956284  135520 retry.go:31] will retry after 552.692268ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:12.262244  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:12.362505  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:12.362710  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:12.509894  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:12.761216  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:12.792746  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:12.792924  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:13.040558  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:13.040588  135520 retry.go:31] will retry after 1.629636383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:00:13.177801  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:13.261810  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:13.293206  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:13.293438  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:13.761648  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:13.793177  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:13.793391  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:13.862588  135520 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 09:00:13.862676  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:13.880273  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
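Because the node is a Docker container, SSH reaches it through a published host port (32888 above) rather than port 22 directly; the Go template in the inspect call extracts that mapping. The shorter docker port form shows the same information:

	docker port addons-273872 22
	# e.g. 0.0.0.0:32888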
	I1025 09:00:13.986299  135520 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 09:00:13.999113  135520 addons.go:238] Setting addon gcp-auth=true in "addons-273872"
	I1025 09:00:13.999169  135520 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:00:13.999651  135520 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:00:14.017222  135520 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 09:00:14.017272  135520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:00:14.034860  135520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:00:14.133231  135520 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 09:00:14.134416  135520 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:00:14.135470  135520 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 09:00:14.135487  135520 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 09:00:14.149184  135520 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 09:00:14.149204  135520 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 09:00:14.162144  135520 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:00:14.162167  135520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 09:00:14.174649  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:00:14.261606  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:14.293165  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:14.293256  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:14.471476  135520 addons.go:479] Verifying addon gcp-auth=true in "addons-273872"
	I1025 09:00:14.473197  135520 out.go:179] * Verifying gcp-auth addon...
	I1025 09:00:14.475026  135520 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 09:00:14.477407  135520 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 09:00:14.477426  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:14.670717  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:14.761882  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:14.792730  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:14.792859  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:14.978691  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:15.177865  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	W1025 09:00:15.196511  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:15.196539  135520 retry.go:31] will retry after 2.631982259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:15.261661  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:15.293110  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:15.293266  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:15.478407  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:15.762391  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:15.793036  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:15.793209  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:15.977725  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:16.261443  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:16.292905  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:16.293034  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:16.478448  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:16.761653  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:16.793278  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:16.793497  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:16.978116  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:17.177948  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:17.261451  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:17.292914  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:17.293173  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:17.477704  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:17.762303  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:17.792725  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:17.792899  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:17.828906  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:17.977972  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:18.261841  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:18.292688  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:18.292852  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:18.350884  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:18.350918  135520 retry.go:31] will retry after 2.879419058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:18.478511  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:18.762173  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:18.792380  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:18.792556  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:18.977817  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:19.261864  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:19.292581  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:19.292673  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:19.478480  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:19.677047  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:19.762082  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:19.792746  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:19.792792  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:19.978422  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:20.261613  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:20.292842  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:20.292978  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:20.478519  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:20.761871  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:20.792317  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:20.792478  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:20.978236  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:21.231074  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:21.261685  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:21.293463  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:21.293577  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:21.477891  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:21.677865  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:21.761214  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:00:21.762799  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:21.762827  135520 retry.go:31] will retry after 2.17085207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:21.792564  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:21.792788  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:21.978230  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:22.261430  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:22.292703  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:22.292871  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:22.478637  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:22.761965  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:22.792431  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:22.792655  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:22.977936  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:23.261170  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:23.292692  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:23.292790  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:23.478631  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:23.761785  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:23.793466  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:23.793622  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:23.934713  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:23.978021  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:24.177637  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:24.261578  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:24.293309  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:24.293309  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 09:00:24.459248  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:24.459282  135520 retry.go:31] will retry after 8.224889013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
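Note the spacing of the retry intervals for this apply (roughly 259ms, 360ms, 382ms, 338ms, 553ms, 1.6s, 2.6s, 2.9s, 2.2s, now 8.2s): retry.go waits a growing, jittered delay between attempts rather than hammering the apiserver. A rough shell rendering of that shape, capped exponential backoff with jitter (the actual retry.go policy may differ):

	delay=1
	for attempt in 1 2 3 4 5 6; do
	  kubectl apply --force -f ig-crd.yaml -f ig-deployment.yaml && break
	  sleep $(( RANDOM % delay + 1 ))
	  delay=$(( delay * 2 ))
	done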
	I1025 09:00:24.477762  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:24.761056  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:24.792462  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:24.792619  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:24.978126  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:25.261498  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:25.293044  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:25.293074  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:25.477700  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:25.761214  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:25.792773  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:25.792851  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:25.978462  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:26.261650  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:26.293329  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:26.293404  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:26.477975  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:26.677575  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:26.761028  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:26.792698  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:26.792883  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:26.978283  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:27.261200  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:27.292606  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:27.292685  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:27.478394  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:27.761977  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:27.792276  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:27.792402  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:27.977819  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:28.260804  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:28.292181  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:28.292832  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:28.478414  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:28.677957  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:28.761490  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:28.793072  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:28.793151  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:28.978219  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:29.261232  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:29.292856  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:29.293071  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:29.479012  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:29.761983  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:29.792408  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:29.792507  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:29.978101  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:30.262099  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:30.292301  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:30.292456  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:30.478333  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:30.761735  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:30.792405  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:30.792922  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:30.978592  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:31.177549  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:31.260991  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:31.292725  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:31.292812  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:31.478611  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:31.762066  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:31.792579  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:31.792642  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:31.978123  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:32.261687  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:32.293106  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:32.293329  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:32.477655  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:32.684858  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:32.760774  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:32.792617  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:32.792723  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:32.977458  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:33.204224  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:33.204261  135520 retry.go:31] will retry after 11.723838383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
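The retry.go:31 lines schedule each new attempt after a randomized delay (8.2s, then 11.7s, and later 9.8s and 24.3s in this log), so the waits grow only roughly and are not monotonic. A generic sketch of that retry-with-jittered-backoff shape; the helper name and jitter formula here are assumptions, not minikube's actual retry.go:

	// backoff.go - a generic jittered-backoff retry sketch. Jitter can make a
	// later delay shorter than an earlier one, consistent with the log above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithJitter(maxAttempts int, base time.Duration, op func() error) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = op(); err == nil {
				return nil
			}
			// Grow the window exponentially, then pick a random point in it.
			window := base << uint(attempt)
			jitter := time.Duration(rand.Int63n(int64(window)))
			wait := window/2 + jitter
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		_ = retryWithJitter(4, 8*time.Second, func() error {
			return errors.New("apply failed")
		})
	}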
	I1025 09:00:33.260736  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:33.293165  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:33.293225  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:33.477819  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:33.677282  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:33.761856  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:33.792399  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:33.792528  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:33.978104  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:34.261322  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:34.292795  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:34.292957  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:34.478426  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:34.761447  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:34.793110  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:34.793262  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:34.977668  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:35.261647  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:35.293172  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:35.293365  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:35.478143  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:35.677777  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:35.761671  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:35.793229  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:35.793280  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:35.977835  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:36.261304  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:36.292916  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:36.293055  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:36.478161  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:36.761645  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:36.793179  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:36.793331  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:36.978271  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:37.261634  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:37.293450  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:37.293623  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:37.478170  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:37.677840  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:37.761508  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:37.793218  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:37.793342  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:37.978017  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:38.261235  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:38.292763  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:38.292997  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:38.478404  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:38.761490  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:38.793340  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:38.793363  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:38.977728  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:39.262177  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:39.292618  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:39.292645  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:39.478247  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:39.761174  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:39.792934  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:39.792962  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:39.978558  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:40.176859  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:40.261236  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:40.292882  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:40.292954  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:40.477707  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:40.761288  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:40.793084  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:40.793232  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:40.977462  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:41.261689  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:41.293196  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:41.293400  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:41.478061  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:41.761646  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:41.793264  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:41.793418  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:41.978046  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:42.177719  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:42.261290  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:42.292848  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:42.293027  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:42.478290  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:42.761845  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:42.792468  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:42.792476  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:42.978367  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:43.261636  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:43.293038  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:43.293155  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:43.477754  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:43.760901  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:43.792269  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:43.792379  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:43.978056  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:44.179453  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:44.260848  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:44.292630  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:44.292676  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:44.477603  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:44.760920  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:44.792787  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:44.792894  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:44.929089  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:44.978502  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:45.262021  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:45.292869  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:45.292913  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:45.459694  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:45.459725  135520 retry.go:31] will retry after 9.845798164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:45.478362  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:45.761898  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:45.792378  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:45.792733  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:45.978336  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:46.261720  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:46.293379  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:46.293421  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:46.478058  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:00:46.677874  135520 node_ready.go:57] node "addons-273872" has "Ready":"False" status (will retry)
	I1025 09:00:46.761518  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:46.793198  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:46.793277  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:46.978046  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:47.261651  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:47.293142  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:47.293376  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:47.477812  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:47.761080  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:47.792574  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:47.792771  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:47.978314  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:48.177607  135520 node_ready.go:49] node "addons-273872" is "Ready"
	I1025 09:00:48.177644  135520 node_ready.go:38] duration metric: took 41.503220016s for node "addons-273872" to be "Ready" ...
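Here the node-readiness poll that produced the node_ready.go:57 warnings above finally observes the NodeReady condition as True. A rough client-go sketch of such a poll, reusing the kubeconfig path and node name from this log (illustrative only, not the harness's node_ready.go):

	// node_ready.go sketch - fetch the node and wait for its NodeReady
	// condition to flip from "False" to "True".
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for {
			ready, err := nodeReady(cs, "addons-273872")
			if err == nil && ready {
				fmt.Println(`node "addons-273872" is "Ready"`)
				return
			}
			fmt.Println(`node has "Ready":"False" status (will retry)`)
			time.Sleep(2 * time.Second) // the log shows roughly 2s between probes
		}
	}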
	I1025 09:00:48.177676  135520 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:00:48.177738  135520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:00:48.196437  135520 api_server.go:72] duration metric: took 42.026542072s to wait for apiserver process to appear ...
	I1025 09:00:48.196469  135520 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:00:48.196501  135520 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 09:00:48.201371  135520 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 09:00:48.202255  135520 api_server.go:141] control plane version: v1.34.1
	I1025 09:00:48.202283  135520 api_server.go:131] duration metric: took 5.804933ms to wait for apiserver health ...
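The health check above is a plain HTTPS GET against the apiserver's /healthz endpoint, which returns HTTP 200 with the body "ok" once the control plane is serving. A self-contained sketch; InsecureSkipVerify is used only to keep the example standalone, whereas the real client trusts the cluster CA:

	// healthz.go - probe the apiserver health endpoint seen in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("https://192.168.49.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
	}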
	I1025 09:00:48.202295  135520 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:00:48.205896  135520 system_pods.go:59] 20 kube-system pods found
	I1025 09:00:48.205922  135520 system_pods.go:61] "amd-gpu-device-plugin-p8cjx" [7df88268-84bc-4cef-97da-8345d34f20d3] Pending
	I1025 09:00:48.205927  135520 system_pods.go:61] "coredns-66bc5c9577-gnhvz" [67796c5e-4dcd-4172-ba92-ecc25b3c5414] Pending
	I1025 09:00:48.205931  135520 system_pods.go:61] "csi-hostpath-attacher-0" [7bccad85-6c5d-44e1-9233-41446de6398a] Pending
	I1025 09:00:48.205935  135520 system_pods.go:61] "csi-hostpath-resizer-0" [588ade2d-170f-4c01-b826-205218e4d48f] Pending
	I1025 09:00:48.205938  135520 system_pods.go:61] "csi-hostpathplugin-p89jc" [eb1f8157-cd16-4765-8677-21cbafc12beb] Pending
	I1025 09:00:48.205942  135520 system_pods.go:61] "etcd-addons-273872" [0edd4187-dc77-4982-b770-8190b76988fb] Running
	I1025 09:00:48.205946  135520 system_pods.go:61] "kindnet-x8plr" [39bc0880-5a63-47b5-b14a-3781d261f34c] Running
	I1025 09:00:48.205953  135520 system_pods.go:61] "kube-apiserver-addons-273872" [3097efdc-7bf0-41f4-9918-ca201dce37e3] Running
	I1025 09:00:48.205964  135520 system_pods.go:61] "kube-controller-manager-addons-273872" [9167ae2d-493a-4c44-b92d-2a728d1fe2b9] Running
	I1025 09:00:48.205974  135520 system_pods.go:61] "kube-ingress-dns-minikube" [77a37ede-f2e7-4344-a23b-57828fe944f2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:00:48.205980  135520 system_pods.go:61] "kube-proxy-fzsmf" [f65747a8-c743-4556-9204-2237e85f7161] Running
	I1025 09:00:48.205990  135520 system_pods.go:61] "kube-scheduler-addons-273872" [84dfeadb-16fd-460d-aab1-ce37af243e51] Running
	I1025 09:00:48.205997  135520 system_pods.go:61] "metrics-server-85b7d694d7-jm2zb" [bd49c1cd-fde4-48b8-9120-c799d302450e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:00:48.206002  135520 system_pods.go:61] "nvidia-device-plugin-daemonset-6dmpz" [bcd43d18-a3a8-4a82-9fc3-425548e2e636] Pending
	I1025 09:00:48.206028  135520 system_pods.go:61] "registry-6b586f9694-9qs7h" [ab90902a-730d-4265-a4c9-7e84180f5480] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:00:48.206034  135520 system_pods.go:61] "registry-creds-764b6fb674-7gfht" [10616cc6-5266-4eaf-b6cf-f732ba0431ed] Pending
	I1025 09:00:48.206038  135520 system_pods.go:61] "registry-proxy-s6vt6" [e6258e29-be09-4ade-b9f6-99c705fbac83] Pending
	I1025 09:00:48.206043  135520 system_pods.go:61] "snapshot-controller-7d9fbc56b8-sb8v4" [a3b37afa-feea-434e-8287-0cfab3e89fef] Pending
	I1025 09:00:48.206048  135520 system_pods.go:61] "snapshot-controller-7d9fbc56b8-thtbp" [58932137-1365-4542-b308-e09868a9098c] Pending
	I1025 09:00:48.206057  135520 system_pods.go:61] "storage-provisioner" [8c2cfee7-a45c-4a36-8c4a-10818c0656de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:00:48.206064  135520 system_pods.go:74] duration metric: took 3.762579ms to wait for pod list to return data ...
	I1025 09:00:48.206078  135520 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:00:48.208586  135520 default_sa.go:45] found service account: "default"
	I1025 09:00:48.208608  135520 default_sa.go:55] duration metric: took 2.523169ms for default service account to be created ...
	I1025 09:00:48.208617  135520 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:00:48.222237  135520 system_pods.go:86] 20 kube-system pods found
	I1025 09:00:48.222279  135520 system_pods.go:89] "amd-gpu-device-plugin-p8cjx" [7df88268-84bc-4cef-97da-8345d34f20d3] Pending
	I1025 09:00:48.222293  135520 system_pods.go:89] "coredns-66bc5c9577-gnhvz" [67796c5e-4dcd-4172-ba92-ecc25b3c5414] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:00:48.222299  135520 system_pods.go:89] "csi-hostpath-attacher-0" [7bccad85-6c5d-44e1-9233-41446de6398a] Pending
	I1025 09:00:48.222308  135520 system_pods.go:89] "csi-hostpath-resizer-0" [588ade2d-170f-4c01-b826-205218e4d48f] Pending
	I1025 09:00:48.222313  135520 system_pods.go:89] "csi-hostpathplugin-p89jc" [eb1f8157-cd16-4765-8677-21cbafc12beb] Pending
	I1025 09:00:48.222320  135520 system_pods.go:89] "etcd-addons-273872" [0edd4187-dc77-4982-b770-8190b76988fb] Running
	I1025 09:00:48.222336  135520 system_pods.go:89] "kindnet-x8plr" [39bc0880-5a63-47b5-b14a-3781d261f34c] Running
	I1025 09:00:48.222341  135520 system_pods.go:89] "kube-apiserver-addons-273872" [3097efdc-7bf0-41f4-9918-ca201dce37e3] Running
	I1025 09:00:48.222364  135520 system_pods.go:89] "kube-controller-manager-addons-273872" [9167ae2d-493a-4c44-b92d-2a728d1fe2b9] Running
	I1025 09:00:48.222375  135520 system_pods.go:89] "kube-ingress-dns-minikube" [77a37ede-f2e7-4344-a23b-57828fe944f2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:00:48.222381  135520 system_pods.go:89] "kube-proxy-fzsmf" [f65747a8-c743-4556-9204-2237e85f7161] Running
	I1025 09:00:48.222388  135520 system_pods.go:89] "kube-scheduler-addons-273872" [84dfeadb-16fd-460d-aab1-ce37af243e51] Running
	I1025 09:00:48.222400  135520 system_pods.go:89] "metrics-server-85b7d694d7-jm2zb" [bd49c1cd-fde4-48b8-9120-c799d302450e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:00:48.222410  135520 system_pods.go:89] "nvidia-device-plugin-daemonset-6dmpz" [bcd43d18-a3a8-4a82-9fc3-425548e2e636] Pending
	I1025 09:00:48.222418  135520 system_pods.go:89] "registry-6b586f9694-9qs7h" [ab90902a-730d-4265-a4c9-7e84180f5480] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:00:48.222429  135520 system_pods.go:89] "registry-creds-764b6fb674-7gfht" [10616cc6-5266-4eaf-b6cf-f732ba0431ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:00:48.222434  135520 system_pods.go:89] "registry-proxy-s6vt6" [e6258e29-be09-4ade-b9f6-99c705fbac83] Pending
	I1025 09:00:48.222445  135520 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sb8v4" [a3b37afa-feea-434e-8287-0cfab3e89fef] Pending
	I1025 09:00:48.222450  135520 system_pods.go:89] "snapshot-controller-7d9fbc56b8-thtbp" [58932137-1365-4542-b308-e09868a9098c] Pending
	I1025 09:00:48.222458  135520 system_pods.go:89] "storage-provisioner" [8c2cfee7-a45c-4a36-8c4a-10818c0656de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:00:48.222481  135520 retry.go:31] will retry after 188.968671ms: missing components: kube-dns
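The k8s-apps wait treats kube-dns (served by the coredns pods) as a required component: the pod list above still shows coredns Pending, so the loop retries after a short delay. A simplified client-go sketch of that kind of poll; the label selector and Running-phase test are a simplification of the harness's readiness logic:

	// pods_ready.go sketch - list kube-system pods by label and retry until
	// the DNS pod reports Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err == nil && len(pods.Items) > 0 &&
				pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Println("kube-dns is running")
				return
			}
			fmt.Println("missing components: kube-dns (will retry)")
			time.Sleep(250 * time.Millisecond)
		}
	}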
	I1025 09:00:48.260997  135520 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:00:48.261034  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:48.292313  135520 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:00:48.292335  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:48.292379  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:48.418636  135520 system_pods.go:86] 20 kube-system pods found
	I1025 09:00:48.418674  135520 system_pods.go:89] "amd-gpu-device-plugin-p8cjx" [7df88268-84bc-4cef-97da-8345d34f20d3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:00:48.418685  135520 system_pods.go:89] "coredns-66bc5c9577-gnhvz" [67796c5e-4dcd-4172-ba92-ecc25b3c5414] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:00:48.418694  135520 system_pods.go:89] "csi-hostpath-attacher-0" [7bccad85-6c5d-44e1-9233-41446de6398a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:00:48.418704  135520 system_pods.go:89] "csi-hostpath-resizer-0" [588ade2d-170f-4c01-b826-205218e4d48f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:00:48.418713  135520 system_pods.go:89] "csi-hostpathplugin-p89jc" [eb1f8157-cd16-4765-8677-21cbafc12beb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:00:48.418719  135520 system_pods.go:89] "etcd-addons-273872" [0edd4187-dc77-4982-b770-8190b76988fb] Running
	I1025 09:00:48.418727  135520 system_pods.go:89] "kindnet-x8plr" [39bc0880-5a63-47b5-b14a-3781d261f34c] Running
	I1025 09:00:48.418735  135520 system_pods.go:89] "kube-apiserver-addons-273872" [3097efdc-7bf0-41f4-9918-ca201dce37e3] Running
	I1025 09:00:48.418741  135520 system_pods.go:89] "kube-controller-manager-addons-273872" [9167ae2d-493a-4c44-b92d-2a728d1fe2b9] Running
	I1025 09:00:48.418755  135520 system_pods.go:89] "kube-ingress-dns-minikube" [77a37ede-f2e7-4344-a23b-57828fe944f2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:00:48.418764  135520 system_pods.go:89] "kube-proxy-fzsmf" [f65747a8-c743-4556-9204-2237e85f7161] Running
	I1025 09:00:48.418774  135520 system_pods.go:89] "kube-scheduler-addons-273872" [84dfeadb-16fd-460d-aab1-ce37af243e51] Running
	I1025 09:00:48.418792  135520 system_pods.go:89] "metrics-server-85b7d694d7-jm2zb" [bd49c1cd-fde4-48b8-9120-c799d302450e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:00:48.418805  135520 system_pods.go:89] "nvidia-device-plugin-daemonset-6dmpz" [bcd43d18-a3a8-4a82-9fc3-425548e2e636] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:00:48.418817  135520 system_pods.go:89] "registry-6b586f9694-9qs7h" [ab90902a-730d-4265-a4c9-7e84180f5480] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:00:48.418828  135520 system_pods.go:89] "registry-creds-764b6fb674-7gfht" [10616cc6-5266-4eaf-b6cf-f732ba0431ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:00:48.418839  135520 system_pods.go:89] "registry-proxy-s6vt6" [e6258e29-be09-4ade-b9f6-99c705fbac83] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:00:48.418849  135520 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sb8v4" [a3b37afa-feea-434e-8287-0cfab3e89fef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:00:48.418861  135520 system_pods.go:89] "snapshot-controller-7d9fbc56b8-thtbp" [58932137-1365-4542-b308-e09868a9098c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:00:48.418868  135520 system_pods.go:89] "storage-provisioner" [8c2cfee7-a45c-4a36-8c4a-10818c0656de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:00:48.418888  135520 retry.go:31] will retry after 254.310097ms: missing components: kube-dns
	I1025 09:00:48.517852  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:48.678137  135520 system_pods.go:86] 20 kube-system pods found
	I1025 09:00:48.678175  135520 system_pods.go:89] "amd-gpu-device-plugin-p8cjx" [7df88268-84bc-4cef-97da-8345d34f20d3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:00:48.678185  135520 system_pods.go:89] "coredns-66bc5c9577-gnhvz" [67796c5e-4dcd-4172-ba92-ecc25b3c5414] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:00:48.678199  135520 system_pods.go:89] "csi-hostpath-attacher-0" [7bccad85-6c5d-44e1-9233-41446de6398a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:00:48.678207  135520 system_pods.go:89] "csi-hostpath-resizer-0" [588ade2d-170f-4c01-b826-205218e4d48f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:00:48.678214  135520 system_pods.go:89] "csi-hostpathplugin-p89jc" [eb1f8157-cd16-4765-8677-21cbafc12beb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:00:48.678220  135520 system_pods.go:89] "etcd-addons-273872" [0edd4187-dc77-4982-b770-8190b76988fb] Running
	I1025 09:00:48.678225  135520 system_pods.go:89] "kindnet-x8plr" [39bc0880-5a63-47b5-b14a-3781d261f34c] Running
	I1025 09:00:48.678230  135520 system_pods.go:89] "kube-apiserver-addons-273872" [3097efdc-7bf0-41f4-9918-ca201dce37e3] Running
	I1025 09:00:48.678235  135520 system_pods.go:89] "kube-controller-manager-addons-273872" [9167ae2d-493a-4c44-b92d-2a728d1fe2b9] Running
	I1025 09:00:48.678252  135520 system_pods.go:89] "kube-ingress-dns-minikube" [77a37ede-f2e7-4344-a23b-57828fe944f2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:00:48.678258  135520 system_pods.go:89] "kube-proxy-fzsmf" [f65747a8-c743-4556-9204-2237e85f7161] Running
	I1025 09:00:48.678266  135520 system_pods.go:89] "kube-scheduler-addons-273872" [84dfeadb-16fd-460d-aab1-ce37af243e51] Running
	I1025 09:00:48.678274  135520 system_pods.go:89] "metrics-server-85b7d694d7-jm2zb" [bd49c1cd-fde4-48b8-9120-c799d302450e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:00:48.678282  135520 system_pods.go:89] "nvidia-device-plugin-daemonset-6dmpz" [bcd43d18-a3a8-4a82-9fc3-425548e2e636] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:00:48.678289  135520 system_pods.go:89] "registry-6b586f9694-9qs7h" [ab90902a-730d-4265-a4c9-7e84180f5480] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:00:48.678298  135520 system_pods.go:89] "registry-creds-764b6fb674-7gfht" [10616cc6-5266-4eaf-b6cf-f732ba0431ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:00:48.678305  135520 system_pods.go:89] "registry-proxy-s6vt6" [e6258e29-be09-4ade-b9f6-99c705fbac83] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:00:48.678315  135520 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sb8v4" [a3b37afa-feea-434e-8287-0cfab3e89fef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:00:48.678323  135520 system_pods.go:89] "snapshot-controller-7d9fbc56b8-thtbp" [58932137-1365-4542-b308-e09868a9098c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:00:48.678329  135520 system_pods.go:89] "storage-provisioner" [8c2cfee7-a45c-4a36-8c4a-10818c0656de] Running
	I1025 09:00:48.678340  135520 system_pods.go:126] duration metric: took 469.71697ms to wait for k8s-apps to be running ...
	I1025 09:00:48.678364  135520 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:00:48.678418  135520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:00:48.694439  135520 system_svc.go:56] duration metric: took 16.066599ms WaitForService to wait for kubelet
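The kubelet check relies on systemctl's exit code: "systemctl is-active --quiet <unit>" exits 0 only when the unit is active, so a nil error from the runner means the service is up. The log runs the command over SSH inside the node; this local sketch shows the same idea:

	// svc_active.go - check a systemd unit's liveness via its exit status.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet service is not active:", err)
			return
		}
		fmt.Println("kubelet service is active")
	}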
	I1025 09:00:48.694476  135520 kubeadm.go:586] duration metric: took 42.524583043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:00:48.694495  135520 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:00:48.696730  135520 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:00:48.696754  135520 node_conditions.go:123] node cpu capacity is 8
	I1025 09:00:48.696766  135520 node_conditions.go:105] duration metric: took 2.26683ms to run NodePressure ...
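The NodePressure verification reads the node's reported capacities, matching the figures logged above (ephemeral storage 304681132Ki, 8 CPUs). A client-go sketch of reading those capacity fields, with the node name and kubeconfig path taken from this log:

	// node_capacity.go sketch - print the node's ephemeral-storage and CPU capacity.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-273872", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
	}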
	I1025 09:00:48.696786  135520 start.go:241] waiting for startup goroutines ...
	I1025 09:00:48.777284  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:48.792746  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:48.792952  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:48.978939  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:49.262476  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:49.293564  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:49.293664  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:49.479454  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:49.761468  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:49.793163  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:49.793175  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:49.978947  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:50.262106  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:50.292709  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:50.292888  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:50.478634  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:50.762289  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:50.793005  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:50.793047  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:50.980081  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:51.261511  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:51.293899  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:51.293925  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:51.478904  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:51.762816  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:51.863084  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:51.863279  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:51.979548  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:52.261786  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:52.293680  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:52.293728  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:52.478919  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:52.762713  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:52.793432  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:52.793597  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:52.978398  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:53.261561  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:53.293491  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:53.293776  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:53.479076  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:53.762622  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:53.793579  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:53.793794  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:53.978465  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:54.261721  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:54.293538  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:54.293685  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:54.478274  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:54.761708  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:54.862175  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:54.862215  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:54.978717  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:55.261985  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:55.293001  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:55.293112  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:55.306234  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:00:55.478822  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:55.762371  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:55.793376  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:55.793551  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:00:55.915829  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:55.915866  135520 retry.go:31] will retry after 24.295474081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:00:55.978675  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:56.262331  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:56.292983  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:56.293017  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:56.479028  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:56.762980  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:56.792447  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:56.792672  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:56.979051  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:57.262177  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:57.292995  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:57.293373  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:57.477710  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:57.762321  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:57.792996  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:57.793132  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:57.977477  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:58.261444  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:58.293071  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:58.293076  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:58.478894  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:58.763015  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:58.793976  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:58.794857  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:58.978778  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:59.261740  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:59.293284  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:59.293496  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:59.533648  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:00:59.762014  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:00:59.792642  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:00:59.792715  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:00:59.978495  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:00.261468  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:00.292999  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:00.293198  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:00.477816  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:00.762800  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:00.792601  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:00.792620  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:00.978310  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:01.261144  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:01.292993  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:01.293014  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:01.477385  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:01.761556  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:01.793020  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:01.793143  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:01.977934  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:02.262496  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:02.292846  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:02.292912  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:02.478491  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:02.761705  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:02.793154  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:02.793189  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:02.977935  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:03.261941  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:03.292820  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:03.292851  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:03.478501  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:03.761322  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:03.792849  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:03.792886  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:03.978709  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:04.262110  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:04.292466  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:04.292631  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:04.477978  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:04.762585  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:04.793028  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:04.793188  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:04.977668  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:05.261649  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:05.293440  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:05.293526  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:05.478214  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:05.761208  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:05.792907  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:05.792939  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:05.978288  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:06.261404  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:06.293011  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:06.293030  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:06.477800  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:06.762483  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:06.792997  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:06.793046  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:06.978475  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:07.261579  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:07.293714  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:07.293938  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:07.478559  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:07.762081  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:07.792837  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:07.792952  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:07.978814  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:08.262683  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:08.294340  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:08.294426  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:08.477762  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:08.761939  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:08.792395  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:08.793027  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:08.977973  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:09.262455  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:09.293897  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:09.293934  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:09.478737  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:09.762224  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:09.792929  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:09.793156  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:09.978743  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:10.261668  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:10.293303  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:10.293313  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:10.478285  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:10.761614  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:10.793253  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:10.793341  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:10.978073  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:11.262701  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:11.363562  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:11.363634  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:11.478045  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:11.761747  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:11.862626  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:11.862722  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:11.978087  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:12.262147  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:12.292698  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:12.292758  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:12.479438  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:12.761874  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:12.863465  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:12.863608  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:12.978505  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:13.261805  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:13.293764  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:13.293809  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:13.478605  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:13.761951  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:13.793533  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:13.793577  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:13.977715  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:14.261805  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:14.293281  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:14.293281  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:14.478396  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:14.761496  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:14.862450  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:14.862453  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:14.978284  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:15.261488  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:15.293450  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:15.293501  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:15.478386  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:15.762410  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:15.862137  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:15.862321  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:15.977783  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:16.262390  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:16.293255  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:16.293446  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:16.478061  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:16.762492  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:16.863058  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:16.863150  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:16.977517  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:17.261815  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:17.362748  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:17.362746  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:17.478287  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:17.760938  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:17.793804  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:17.794032  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:17.978691  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:18.261794  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:18.293047  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:18.293177  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:18.477336  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:18.761706  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:18.793156  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:18.793286  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:18.977963  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:19.261968  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:19.293014  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:19.293234  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:19.527700  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:19.762071  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:19.792399  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:19.792485  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:19.978390  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:20.211498  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:01:20.263544  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:20.297383  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:20.298193  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:20.479510  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:20.765725  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:20.797218  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:20.797396  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:20.979226  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:01:21.146603  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:01:21.146702  135520 retry.go:31] will retry after 22.055491936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:01:21.262472  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:21.293552  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:21.294595  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:21.478551  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:21.762374  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:21.794329  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:21.794559  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:21.978850  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:22.262476  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:22.293773  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:22.293826  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:22.478808  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:22.864058  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:22.864940  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:22.864954  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:23.106515  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:23.261327  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:23.292958  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:23.293131  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:23.478948  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:23.762366  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:23.793234  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:23.793307  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:23.978006  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:24.262110  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:24.293410  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:24.293490  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:24.478473  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:24.761758  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:24.793744  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:24.793793  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:24.979098  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:25.262284  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:25.292935  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:25.293267  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:25.478928  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:25.762417  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:25.793836  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:25.794800  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:25.979027  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:26.262465  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:26.293537  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:26.363835  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:26.478112  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:26.762957  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:26.793630  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:26.793636  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:26.978988  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:27.261951  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:27.292595  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:27.292643  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:27.478078  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:27.762042  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:27.792926  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:27.792994  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:27.978892  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:28.262276  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:28.293015  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:28.293224  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:28.478134  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:28.761462  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:28.793280  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:28.793332  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:28.978097  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:29.262098  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:29.293817  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:29.294088  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:29.478076  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:29.762214  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:29.793136  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:29.793240  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:29.978379  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:30.262472  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:30.293269  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:30.293342  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:30.478993  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:30.762097  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:30.792657  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:30.792889  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:30.978392  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:31.261149  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:31.292741  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:31.292966  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:31.478498  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:31.761647  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:31.792885  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:31.792972  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:31.978885  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:32.262780  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:32.293433  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:32.293473  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:32.479432  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:32.761277  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:32.793808  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:32.793989  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:32.978469  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:33.344038  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:33.344059  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:33.344038  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:33.478443  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:33.761910  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:33.793195  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:33.793230  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:33.978275  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:34.261053  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:34.292766  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:34.292772  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:34.478228  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:34.761435  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:34.793341  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:34.793399  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:01:34.978376  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:35.261563  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:35.293037  135520 kapi.go:107] duration metric: took 1m27.503089224s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 09:01:35.294037  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:35.477901  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:35.762342  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:35.793486  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:35.979033  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:36.261369  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:36.292995  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:36.478812  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:36.762335  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:36.792851  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:36.982031  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:37.260561  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:37.292642  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:37.477938  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:37.762115  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:37.792377  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:37.977886  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:38.261853  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:38.292508  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:38.478414  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:38.761645  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:38.793283  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:38.978289  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:39.261747  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:39.293117  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:39.478003  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:39.762084  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:39.792824  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:39.978270  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:40.262558  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:40.293681  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:40.477886  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:01:40.762215  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:40.793929  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:40.980253  135520 kapi.go:107] duration metric: took 1m26.505222003s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 09:01:40.982496  135520 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-273872 cluster.
	I1025 09:01:40.983825  135520 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 09:01:40.985009  135520 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
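The three gcp-auth messages above describe the addon's opt-out mechanism: pods that carry a gcp-auth-skip-secret label are left untouched, while all other new pods get the credentials mounted. A minimal sketch of an opted-out pod follows; the pod name, image, and the "true" label value are illustrative assumptions (the log only specifies the label key):

	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"  # presence of this label key opts the pod out
	spec:
	  containers:
	  - name: main
	    image: busybox                # placeholder image
	    command: ["sleep", "3600"]
	EOF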
	I1025 09:01:41.263717  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:41.294027  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:41.792423  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:41.793331  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:42.262282  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:42.293157  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:42.761506  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:42.793388  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:43.202687  135520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:01:43.263043  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:43.292863  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:43.762538  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:43.793484  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:01:43.924194  135520 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:01:43.924330  135520 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
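	The `apiVersion not set, kind not set` failure above means the applied ig-crd.yaml was missing its type metadata, so kubectl's client-side validation rejected the file. For comparison, a well-formed CRD manifest always opens with those two fields; a generic sketch follows (group and names are placeholders, not the real inspektor-gadget CRD):

	  kubectl apply --dry-run=client -f - <<'EOF'
	  apiVersion: apiextensions.k8s.io/v1   # the field the failing file lacked
	  kind: CustomResourceDefinition        # likewise
	  metadata:
	    name: examples.demo.example.com     # must be <plural>.<group>
	  spec:
	    group: demo.example.com
	    names:
	      kind: Example
	      plural: examples
	    scope: Namespaced
	    versions:
	    - name: v1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
	  EOF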
	I1025 09:01:44.261082  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:44.292675  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:44.762817  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:44.793948  135520 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:01:45.262324  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:45.292926  135520 kapi.go:107] duration metric: took 1m37.503271119s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 09:01:45.762468  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:46.262195  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:46.764737  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:47.262376  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:47.762064  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:48.262367  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:48.761950  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:49.261883  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:49.762615  135520 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:01:50.262159  135520 kapi.go:107] duration metric: took 1m42.003909786s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 09:01:50.263861  135520 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, amd-gpu-device-plugin, registry-creds, ingress-dns, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1025 09:01:50.264953  135520 addons.go:514] duration metric: took 1m44.09506125s for enable addons: enabled=[cloud-spanner storage-provisioner amd-gpu-device-plugin registry-creds ingress-dns nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1025 09:01:50.264994  135520 start.go:246] waiting for cluster config update ...
	I1025 09:01:50.265012  135520 start.go:255] writing updated cluster config ...
	I1025 09:01:50.265286  135520 ssh_runner.go:195] Run: rm -f paused
	I1025 09:01:50.269217  135520 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:01:50.272398  135520 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gnhvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.276111  135520 pod_ready.go:94] pod "coredns-66bc5c9577-gnhvz" is "Ready"
	I1025 09:01:50.276132  135520 pod_ready.go:86] duration metric: took 3.712979ms for pod "coredns-66bc5c9577-gnhvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.277821  135520 pod_ready.go:83] waiting for pod "etcd-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.281087  135520 pod_ready.go:94] pod "etcd-addons-273872" is "Ready"
	I1025 09:01:50.281104  135520 pod_ready.go:86] duration metric: took 3.266141ms for pod "etcd-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.282841  135520 pod_ready.go:83] waiting for pod "kube-apiserver-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.286066  135520 pod_ready.go:94] pod "kube-apiserver-addons-273872" is "Ready"
	I1025 09:01:50.286086  135520 pod_ready.go:86] duration metric: took 3.227704ms for pod "kube-apiserver-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.287611  135520 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.673707  135520 pod_ready.go:94] pod "kube-controller-manager-addons-273872" is "Ready"
	I1025 09:01:50.673734  135520 pod_ready.go:86] duration metric: took 386.103591ms for pod "kube-controller-manager-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:50.874296  135520 pod_ready.go:83] waiting for pod "kube-proxy-fzsmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:51.272842  135520 pod_ready.go:94] pod "kube-proxy-fzsmf" is "Ready"
	I1025 09:01:51.272870  135520 pod_ready.go:86] duration metric: took 398.548365ms for pod "kube-proxy-fzsmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:51.473285  135520 pod_ready.go:83] waiting for pod "kube-scheduler-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:51.872963  135520 pod_ready.go:94] pod "kube-scheduler-addons-273872" is "Ready"
	I1025 09:01:51.872993  135520 pod_ready.go:86] duration metric: took 399.682236ms for pod "kube-scheduler-addons-273872" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:01:51.873004  135520 pod_ready.go:40] duration metric: took 1.603759497s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:01:51.919578  135520 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:01:51.922306  135520 out.go:179] * Done! kubectl is now configured to use "addons-273872" cluster and "default" namespace by default
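	To verify the context switch the final message reports, standard kubectl commands suffice (the expected output is inferred, not captured here):

	  kubectl config current-context   # expected: addons-273872
	  kubectl get pods -A              # should answer without any extra flags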
	
	
	==> CRI-O <==
	Oct 25 09:02:00 addons-273872 crio[769]: time="2025-10-25T09:02:00.534061405Z" level=info msg="Removing container: d442487e59df32340d92abf659e87c3f6d6338362e7fc842707f769a907bd5bc" id=5eb70218-f25f-432f-9df5-bdd3b5a328e2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:02:00 addons-273872 crio[769]: time="2025-10-25T09:02:00.540404074Z" level=info msg="Removed container d442487e59df32340d92abf659e87c3f6d6338362e7fc842707f769a907bd5bc: gcp-auth/gcp-auth-certs-create-dcnjn/create" id=5eb70218-f25f-432f-9df5-bdd3b5a328e2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:02:00 addons-273872 crio[769]: time="2025-10-25T09:02:00.542783805Z" level=info msg="Stopping pod sandbox: 5ed60b2856688eb6c6e9a173d2f14fed90989910b2dd37809e6165ea576addef" id=b0a2e60c-7dae-4476-85eb-a3af428137aa name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:02:00 addons-273872 crio[769]: time="2025-10-25T09:02:00.542820145Z" level=info msg="Stopped pod sandbox (already stopped): 5ed60b2856688eb6c6e9a173d2f14fed90989910b2dd37809e6165ea576addef" id=b0a2e60c-7dae-4476-85eb-a3af428137aa name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:02:00 addons-273872 crio[769]: time="2025-10-25T09:02:00.543221696Z" level=info msg="Removing pod sandbox: 5ed60b2856688eb6c6e9a173d2f14fed90989910b2dd37809e6165ea576addef" id=aa9ce6ba-33e3-4ab8-a540-8ea9867ef188 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:02:00 addons-273872 crio[769]: time="2025-10-25T09:02:00.546013816Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:02:00 addons-273872 crio[769]: time="2025-10-25T09:02:00.546063142Z" level=info msg="Removed pod sandbox: 5ed60b2856688eb6c6e9a173d2f14fed90989910b2dd37809e6165ea576addef" id=aa9ce6ba-33e3-4ab8-a540-8ea9867ef188 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:02:00 addons-273872 crio[769]: time="2025-10-25T09:02:00.546495622Z" level=info msg="Stopping pod sandbox: fa27696e51ac92d265bc7a0d95adb0da10eba59b60c2a86ce56e2300a98fcc24" id=ddbfd2af-7d43-48fd-b78f-eb7fe9dd14d0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:02:00 addons-273872 crio[769]: time="2025-10-25T09:02:00.546547507Z" level=info msg="Stopped pod sandbox (already stopped): fa27696e51ac92d265bc7a0d95adb0da10eba59b60c2a86ce56e2300a98fcc24" id=ddbfd2af-7d43-48fd-b78f-eb7fe9dd14d0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:02:00 addons-273872 crio[769]: time="2025-10-25T09:02:00.54683055Z" level=info msg="Removing pod sandbox: fa27696e51ac92d265bc7a0d95adb0da10eba59b60c2a86ce56e2300a98fcc24" id=9c07719b-ae82-4809-95c8-c8ec539b1360 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:02:00 addons-273872 crio[769]: time="2025-10-25T09:02:00.549436354Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:02:00 addons-273872 crio[769]: time="2025-10-25T09:02:00.549493562Z" level=info msg="Removed pod sandbox: fa27696e51ac92d265bc7a0d95adb0da10eba59b60c2a86ce56e2300a98fcc24" id=9c07719b-ae82-4809-95c8-c8ec539b1360 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.732724965Z" level=info msg="Running pod sandbox: default/nginx/POD" id=5585559b-ad7a-484c-991e-7d850180b660 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.732834271Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.740596273Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:a47a200e149048f87ebe0437398e76b77f1c45e4713fa19f72446cd2368e9d6d UID:e0abdafe-c76b-4464-b70e-72d4f797a77c NetNS:/var/run/netns/c6a6fa31-0b16-4c4c-a5c6-b16950fb59dc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00053ade0}] Aliases:map[]}"
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.740641083Z" level=info msg="Adding pod default_nginx to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.754719658Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:a47a200e149048f87ebe0437398e76b77f1c45e4713fa19f72446cd2368e9d6d UID:e0abdafe-c76b-4464-b70e-72d4f797a77c NetNS:/var/run/netns/c6a6fa31-0b16-4c4c-a5c6-b16950fb59dc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00053ade0}] Aliases:map[]}"
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.75485833Z" level=info msg="Checking pod default_nginx for CNI network kindnet (type=ptp)"
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.755895993Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.757105493Z" level=info msg="Ran pod sandbox a47a200e149048f87ebe0437398e76b77f1c45e4713fa19f72446cd2368e9d6d with infra container: default/nginx/POD" id=5585559b-ad7a-484c-991e-7d850180b660 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.75881286Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=20ab4b89-4df7-424d-a42f-c472f45d6218 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.758946468Z" level=info msg="Image docker.io/nginx:alpine not found" id=20ab4b89-4df7-424d-a42f-c472f45d6218 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.758992344Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=20ab4b89-4df7-424d-a42f-c472f45d6218 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.759612516Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=cb13484d-5884-4d50-a223-3ce07cfc592f name=/runtime.v1.ImageService/PullImage
	Oct 25 09:02:02 addons-273872 crio[769]: time="2025-10-25T09:02:02.767060406Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
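	The RuntimeService/ImageService entries above are CRI calls that can be reproduced with crictl on the node itself; a sketch, run inside the node (e.g. via `minikube -p addons-273872 ssh`), using standard crictl flags:

	  sudo crictl pods --name nginx            # list the sandbox logged above
	  sudo crictl pull docker.io/nginx:alpine  # repeat the image pull by hand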
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	1749e24523753       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   940b088fd28d4       busybox                                     default
	6acc989b2a222       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          13 seconds ago       Running             csi-snapshotter                          0                   209f02eecd80a       csi-hostpathplugin-p89jc                    kube-system
	3ef84406aa714       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 seconds ago       Running             csi-provisioner                          0                   209f02eecd80a       csi-hostpathplugin-p89jc                    kube-system
	bfbbb33612538       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            16 seconds ago       Running             liveness-probe                           0                   209f02eecd80a       csi-hostpathplugin-p89jc                    kube-system
	30d14efd00c17       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           17 seconds ago       Running             hostpath                                 0                   209f02eecd80a       csi-hostpathplugin-p89jc                    kube-system
	3c4dfd048ae14       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                18 seconds ago       Running             node-driver-registrar                    0                   209f02eecd80a       csi-hostpathplugin-p89jc                    kube-system
	f51be06bc9d9a       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             19 seconds ago       Running             controller                               0                   ad679187e79fa       ingress-nginx-controller-675c5ddd98-cdlhj   ingress-nginx
	faaf1bf843a53       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 22 seconds ago       Running             gcp-auth                                 0                   65acc4502e968       gcp-auth-78565c9fb4-bjgg6                   gcp-auth
	2f8611e2aa0a5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            26 seconds ago       Running             gadget                                   0                   1da69cccffc52       gadget-w9btk                                gadget
	7ed2f0ed59548       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              29 seconds ago       Running             registry-proxy                           0                   671dfbc3079e8       registry-proxy-s6vt6                        kube-system
	9fc2a24b06ef7       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     32 seconds ago       Running             amd-gpu-device-plugin                    0                   29d19f77dbdad       amd-gpu-device-plugin-p8cjx                 kube-system
	36e423e3e9d3f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   34 seconds ago       Running             csi-external-health-monitor-controller   0                   209f02eecd80a       csi-hostpathplugin-p89jc                    kube-system
	8f0ebcd809044       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              35 seconds ago       Running             csi-resizer                              0                   54fafde4dff43       csi-hostpath-resizer-0                      kube-system
	428c8023af396       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     37 seconds ago       Running             nvidia-device-plugin-ctr                 0                   181959880ed6f       nvidia-device-plugin-daemonset-6dmpz        kube-system
	63ac188b24d3a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      46 seconds ago       Running             volume-snapshot-controller               0                   3118dd0712a37       snapshot-controller-7d9fbc56b8-sb8v4        kube-system
	d2cd04d0db0a9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      46 seconds ago       Running             volume-snapshot-controller               0                   245307580b748       snapshot-controller-7d9fbc56b8-thtbp        kube-system
	99c81d2cbcf13       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             47 seconds ago       Running             csi-attacher                             0                   3d5d4c3375257       csi-hostpath-attacher-0                     kube-system
	75730271f6dcb       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              49 seconds ago       Running             yakd                                     0                   9e3ca0825d44e       yakd-dashboard-5ff678cb9-8sg9x              yakd-dashboard
	5bc8ff2063cf8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             52 seconds ago       Running             local-path-provisioner                   0                   e2d9833da479f       local-path-provisioner-648f6765c9-8n6qc     local-path-storage
	97f200163e1f9       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             53 seconds ago       Exited              patch                                    1                   80b6573db0131       ingress-nginx-admission-patch-gvs8h         ingress-nginx
	3d0c32677f602       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   53 seconds ago       Exited              create                                   0                   95fdb6cafe5f8       ingress-nginx-admission-create-l8qdq        ingress-nginx
	4762e9db22498       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               54 seconds ago       Running             cloud-spanner-emulator                   0                   92b2a09ddbff3       cloud-spanner-emulator-86bd5cbb97-x46xr     default
	9fe9c1838c296       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   1d1830dbf3bae       registry-6b586f9694-9qs7h                   kube-system
	a768f7fc3ff87       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   fa21de2c894d2       kube-ingress-dns-minikube                   kube-system
	5123be046b86f       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   d6cffe5fc010a       metrics-server-85b7d694d7-jm2zb             kube-system
	0c53c0cc8c974       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   7d7627b4e7252       coredns-66bc5c9577-gnhvz                    kube-system
	f6a1623c75ccd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   7c3b51bbe148f       storage-provisioner                         kube-system
	856adda6d4a26       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   b92fca7a94ff2       kube-proxy-fzsmf                            kube-system
	b61ce248f4c77       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   9ddfdf290b4ba       kindnet-x8plr                               kube-system
	d47c77a17465c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   00e09e0e45579       kube-controller-manager-addons-273872       kube-system
	8ce2136d4288f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   81c807f6e2343       kube-apiserver-addons-273872                kube-system
	274bb680de1b5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   0775bed4e43fc       kube-scheduler-addons-273872                kube-system
	34b878e3a18d6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   94e1c19d814a7       etcd-addons-273872                          kube-system
	
	
	==> coredns [0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6] <==
	[INFO] 10.244.0.16:48401 - 8621 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.002982814s
	[INFO] 10.244.0.16:43139 - 52517 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000068287s
	[INFO] 10.244.0.16:43139 - 52828 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000092855s
	[INFO] 10.244.0.16:49987 - 12908 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000084607s
	[INFO] 10.244.0.16:49987 - 12661 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000117891s
	[INFO] 10.244.0.16:40222 - 41974 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000052223s
	[INFO] 10.244.0.16:40222 - 41534 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000092093s
	[INFO] 10.244.0.16:33525 - 5995 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000113316s
	[INFO] 10.244.0.16:33525 - 5854 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000153601s
	[INFO] 10.244.0.22:56688 - 49374 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000196177s
	[INFO] 10.244.0.22:58282 - 56528 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000338557s
	[INFO] 10.244.0.22:39089 - 19528 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129098s
	[INFO] 10.244.0.22:47532 - 6495 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000208754s
	[INFO] 10.244.0.22:43129 - 47127 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134129s
	[INFO] 10.244.0.22:60363 - 34215 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119656s
	[INFO] 10.244.0.22:32921 - 28688 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.002990234s
	[INFO] 10.244.0.22:53001 - 1048 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004145688s
	[INFO] 10.244.0.22:36885 - 8980 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005384932s
	[INFO] 10.244.0.22:47162 - 2185 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006321325s
	[INFO] 10.244.0.22:33420 - 7076 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004364661s
	[INFO] 10.244.0.22:41774 - 49004 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006786433s
	[INFO] 10.244.0.22:53211 - 13893 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004433016s
	[INFO] 10.244.0.22:46833 - 56818 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.01649052s
	[INFO] 10.244.0.22:41965 - 18753 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.000927067s
	[INFO] 10.244.0.22:41302 - 28425 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001032374s
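	The NXDOMAIN chains above are the usual ndots:5 search-path expansion: an external name such as storage.googleapis.com is tried against every cluster and GCE search domain before the bare name finally resolves. A sketch of capping that fan-out per pod (values illustrative; dnsConfig is the standard PodSpec field):

	  kubectl apply -f - <<'EOF'
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: ndots-demo                 # placeholder
	  spec:
	    dnsConfig:
	      options:
	      - name: ndots
	        value: "1"                   # names with a dot go upstream first
	    containers:
	    - name: app
	      image: busybox:stable
	      command: ["sleep", "3600"]
	  EOF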
	
	
	==> describe nodes <==
	Name:               addons-273872
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-273872
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=addons-273872
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_00_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-273872
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-273872"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 08:59:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-273872
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:01:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:02:02 +0000   Sat, 25 Oct 2025 08:59:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:02:02 +0000   Sat, 25 Oct 2025 08:59:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:02:02 +0000   Sat, 25 Oct 2025 08:59:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:02:02 +0000   Sat, 25 Oct 2025 09:00:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-273872
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                30c17162-2f74-4668-9bd8-3fa3eed59df9
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-86bd5cbb97-x46xr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  gadget                      gadget-w9btk                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  gcp-auth                    gcp-auth-78565c9fb4-bjgg6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-cdlhj    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         116s
	  kube-system                 amd-gpu-device-plugin-p8cjx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 coredns-66bc5c9577-gnhvz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     117s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 csi-hostpathplugin-p89jc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 etcd-addons-273872                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m3s
	  kube-system                 kindnet-x8plr                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      118s
	  kube-system                 kube-apiserver-addons-273872                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-addons-273872        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-fzsmf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-addons-273872                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 metrics-server-85b7d694d7-jm2zb              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         116s
	  kube-system                 nvidia-device-plugin-daemonset-6dmpz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 registry-6b586f9694-9qs7h                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 registry-creds-764b6fb674-7gfht              0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 registry-proxy-s6vt6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 snapshot-controller-7d9fbc56b8-sb8v4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 snapshot-controller-7d9fbc56b8-thtbp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  local-path-storage          local-path-provisioner-648f6765c9-8n6qc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8sg9x               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     116s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 115s  kube-proxy       
	  Normal  Starting                 2m3s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s  kubelet          Node addons-273872 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s  kubelet          Node addons-273872 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s  kubelet          Node addons-273872 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           119s  node-controller  Node addons-273872 event: Registered Node addons-273872 in Controller
	  Normal  NodeReady                75s   kubelet          Node addons-273872 status is now: NodeReady
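	The section above is standard `kubectl describe node` output; to reproduce it against this profile (assuming kubectl still points at the addons-273872 context):

	  kubectl describe node addons-273872
	  kubectl get node addons-273872 -o jsonpath='{.status.allocatable}'  # just the allocatable figures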
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 3d c0 43 a9 42 08 06
	[ +29.006213] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000026] ll header: 00000000: ff ff ff ff ff ff be 43 b6 b7 da a7 08 06
	[  +1.084703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e d7 dd db 3b 23 08 06
	[  +0.038938] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 26 b2 8f fb 9c 08 06
	[  +6.531873] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 99 7b f7 3c 04 08 06
	[Oct25 08:48] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e 63 4f 94 03 27 08 06
	[  +0.978509] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 ff 62 1d 28 b8 08 06
	[  +0.023774] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 63 3c 00 95 75 08 06
	[  +4.654609] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e d3 a0 1e 29 5b 08 06
	[Oct25 08:49] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 8a 3c 53 b9 57 08 06
	[  +0.902860] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 31 67 c2 c2 7b 08 06
	[  +0.039423] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 1c f5 68 9f 00 08 06
	[  +4.451388] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 07 4a e3 be 93 08 06
	
	
	==> etcd [34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f] <==
	{"level":"warn","ts":"2025-10-25T08:59:57.567501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:00:08.841172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:00:08.848563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:00:34.977254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:00:34.983552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:00:35.008265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52896","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:01:19.942004Z","caller":"traceutil/trace.go:172","msg":"trace[385402812] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"127.936139ms","start":"2025-10-25T09:01:19.814048Z","end":"2025-10-25T09:01:19.941984Z","steps":["trace[385402812] 'process raft request'  (duration: 127.768354ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:01:19.944124Z","caller":"traceutil/trace.go:172","msg":"trace[1503550808] transaction","detail":"{read_only:false; response_revision:1090; number_of_response:1; }","duration":"118.287648ms","start":"2025-10-25T09:01:19.825821Z","end":"2025-10-25T09:01:19.944108Z","steps":["trace[1503550808] 'process raft request'  (duration: 118.203528ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:01:22.862028Z","caller":"traceutil/trace.go:172","msg":"trace[110641307] linearizableReadLoop","detail":"{readStateIndex:1136; appliedIndex:1136; }","duration":"101.438047ms","start":"2025-10-25T09:01:22.760562Z","end":"2025-10-25T09:01:22.862001Z","steps":["trace[110641307] 'read index received'  (duration: 101.426419ms)","trace[110641307] 'applied index is now lower than readState.Index'  (duration: 9.94µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:01:22.862166Z","caller":"traceutil/trace.go:172","msg":"trace[2060150914] transaction","detail":"{read_only:false; response_revision:1104; number_of_response:1; }","duration":"111.216441ms","start":"2025-10-25T09:01:22.750932Z","end":"2025-10-25T09:01:22.862149Z","steps":["trace[2060150914] 'process raft request'  (duration: 111.096749ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:01:22.862144Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.568854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:01:22.862309Z","caller":"traceutil/trace.go:172","msg":"trace[887181409] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1103; }","duration":"101.752743ms","start":"2025-10-25T09:01:22.760548Z","end":"2025-10-25T09:01:22.862301Z","steps":["trace[887181409] 'agreement among raft nodes before linearized reading'  (duration: 101.533774ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:01:23.105200Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.690578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:01:23.105267Z","caller":"traceutil/trace.go:172","msg":"trace[1660278697] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"127.769442ms","start":"2025-10-25T09:01:22.977482Z","end":"2025-10-25T09:01:23.105252Z","steps":["trace[1660278697] 'range keys from in-memory index tree'  (duration: 127.603929ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:01:42.116698Z","caller":"traceutil/trace.go:172","msg":"trace[11700191] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"102.644126ms","start":"2025-10-25T09:01:42.014036Z","end":"2025-10-25T09:01:42.116680Z","steps":["trace[11700191] 'process raft request'  (duration: 76.522583ms)","trace[11700191] 'compare'  (duration: 26.01942ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:01:49.113436Z","caller":"traceutil/trace.go:172","msg":"trace[1921208394] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"102.717747ms","start":"2025-10-25T09:01:49.010697Z","end":"2025-10-25T09:01:49.113415Z","steps":["trace[1921208394] 'process raft request'  (duration: 102.583669ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:01:49.254160Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.672139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" limit:1 ","response":"range_response_count:1 size:3214"}
	{"level":"info","ts":"2025-10-25T09:01:49.254250Z","caller":"traceutil/trace.go:172","msg":"trace[744852649] range","detail":"{range_begin:/registry/jobs/gcp-auth/gcp-auth-certs-patch; range_end:; response_count:1; response_revision:1227; }","duration":"158.788212ms","start":"2025-10-25T09:01:49.095442Z","end":"2025-10-25T09:01:49.254230Z","steps":["trace[744852649] 'agreement among raft nodes before linearized reading'  (duration: 82.107289ms)","trace[744852649] 'range keys from in-memory index tree'  (duration: 76.510222ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:01:49.254482Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.961204ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-5gx27\" limit:1 ","response":"range_response_count:1 size:4154"}
	{"level":"warn","ts":"2025-10-25T09:01:49.254584Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.785814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-dcnjn\" limit:1 ","response":"range_response_count:1 size:4158"}
	{"level":"info","ts":"2025-10-25T09:01:49.254643Z","caller":"traceutil/trace.go:172","msg":"trace[600584683] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-5gx27; range_end:; response_count:1; response_revision:1227; }","duration":"159.138546ms","start":"2025-10-25T09:01:49.095489Z","end":"2025-10-25T09:01:49.254627Z","steps":["trace[600584683] 'agreement among raft nodes before linearized reading'  (duration: 82.044588ms)","trace[600584683] 'range keys from in-memory index tree'  (duration: 76.607087ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:01:49.254684Z","caller":"traceutil/trace.go:172","msg":"trace[254071811] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-create-dcnjn; range_end:; response_count:1; response_revision:1228; }","duration":"138.898117ms","start":"2025-10-25T09:01:49.115770Z","end":"2025-10-25T09:01:49.254668Z","steps":["trace[254071811] 'agreement among raft nodes before linearized reading'  (duration: 138.687442ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:01:49.254480Z","caller":"traceutil/trace.go:172","msg":"trace[963662017] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"161.893534ms","start":"2025-10-25T09:01:49.092563Z","end":"2025-10-25T09:01:49.254457Z","steps":["trace[963662017] 'process raft request'  (duration: 85.020756ms)","trace[963662017] 'compare'  (duration: 76.617336ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:01:49.254630Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.841144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" limit:1 ","response":"range_response_count:1 size:3215"}
	{"level":"info","ts":"2025-10-25T09:01:49.254819Z","caller":"traceutil/trace.go:172","msg":"trace[1352818719] range","detail":"{range_begin:/registry/jobs/gcp-auth/gcp-auth-certs-create; range_end:; response_count:1; response_revision:1228; }","duration":"139.03074ms","start":"2025-10-25T09:01:49.115776Z","end":"2025-10-25T09:01:49.254807Z","steps":["trace[1352818719] 'agreement among raft nodes before linearized reading'  (duration: 138.772308ms)"],"step_count":1}
	
	
	==> gcp-auth [faaf1bf843a53afa00f74d85e4bf45d6889a94f6a92148211d9bdb5f583ad0b1] <==
	2025/10/25 09:01:40 GCP Auth Webhook started!
	2025/10/25 09:01:52 Ready to marshal response ...
	2025/10/25 09:01:52 Ready to write response ...
	2025/10/25 09:01:52 Ready to marshal response ...
	2025/10/25 09:01:52 Ready to write response ...
	2025/10/25 09:01:52 Ready to marshal response ...
	2025/10/25 09:01:52 Ready to write response ...
	2025/10/25 09:02:02 Ready to marshal response ...
	2025/10/25 09:02:02 Ready to write response ...
	
	
	==> kernel <==
	 09:02:03 up 44 min,  0 user,  load average: 7.33, 2.56, 1.67
	Linux addons-273872 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa] <==
	I1025 09:00:07.798903       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:00:07.799103       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:00:37.799124       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:00:37.799135       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:00:37.799246       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:00:37.799285       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 09:00:39.299069       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:00:39.299094       1 metrics.go:72] Registering metrics
	I1025 09:00:39.299141       1 controller.go:711] "Syncing nftables rules"
	I1025 09:00:47.800968       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:00:47.801031       1 main.go:301] handling current node
	I1025 09:00:57.799268       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:00:57.799333       1 main.go:301] handling current node
	I1025 09:01:07.803459       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:01:07.803492       1 main.go:301] handling current node
	I1025 09:01:17.798544       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:01:17.798587       1 main.go:301] handling current node
	I1025 09:01:27.798725       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:01:27.798762       1 main.go:301] handling current node
	I1025 09:01:37.799377       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:01:37.799405       1 main.go:301] handling current node
	I1025 09:01:47.798612       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:01:47.798656       1 main.go:301] handling current node
	I1025 09:01:57.799441       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:01:57.799500       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:00:51.699595       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.129.188:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.129.188:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.129.188:443: connect: connection refused" logger="UnhandledError"
	E1025 09:00:51.701684       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.129.188:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.129.188:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.129.188:443: connect: connection refused" logger="UnhandledError"
	W1025 09:00:52.700583       1 handler_proxy.go:99] no RequestInfo found in the context
	W1025 09:00:52.700601       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:00:52.700634       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1025 09:00:52.700651       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1025 09:00:52.700685       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1025 09:00:52.701824       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 09:00:56.712994       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:00:56.713042       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:00:56.713082       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.129.188:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.129.188:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1025 09:00:56.723600       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 09:02:01.599118       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59182: use of closed network connection
	E1025 09:02:01.751204       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59194: use of closed network connection
	I1025 09:02:02.264187       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 09:02:02.464752       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.222.47"}
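	The repeated 503s against v1beta1.metrics.k8s.io above are the aggregated metrics API failing its availability probe while metrics-server starts up. A quick health check (the k8s-app=metrics-server label is the addon's usual selector, assumed here):

	  kubectl get apiservice v1beta1.metrics.k8s.io        # AVAILABLE should read True
	  kubectl -n kube-system get pods -l k8s-app=metrics-server
	  kubectl top nodes                                    # works once the APIService is Available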
	
	
	==> kube-controller-manager [d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6] <==
	I1025 09:00:04.961216       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:00:04.962207       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:00:04.962229       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:00:04.962597       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:00:04.962623       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:00:04.962657       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:00:04.962706       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:00:04.962730       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:00:04.962785       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:00:04.962792       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:00:04.963092       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:00:04.963215       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:00:04.963691       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:00:04.967728       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:00:04.977887       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:00:04.986428       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:00:07.339509       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1025 09:00:34.971660       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 09:00:34.971798       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1025 09:00:34.971838       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 09:00:34.993579       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1025 09:00:34.996575       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 09:00:35.072847       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:00:35.097446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:00:49.901371       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23] <==
	I1025 09:00:07.372875       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:00:07.494779       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:00:07.596005       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:00:07.596052       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:00:07.596152       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:00:07.622141       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:00:07.622196       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:00:07.629090       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:00:07.629614       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:00:07.629659       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:00:07.631247       1 config.go:200] "Starting service config controller"
	I1025 09:00:07.632381       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:00:07.631706       1 config.go:309] "Starting node config controller"
	I1025 09:00:07.632421       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:00:07.632428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:00:07.631919       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:00:07.632436       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:00:07.631935       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:00:07.632451       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:00:07.732699       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:00:07.732714       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:00:07.733460       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa] <==
	E1025 08:59:57.983340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 08:59:57.983527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:59:57.983557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 08:59:57.983595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 08:59:57.983618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 08:59:57.983629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 08:59:57.983653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 08:59:57.984226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:59:57.984582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:59:57.984610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:59:57.984608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 08:59:57.984765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:59:58.847305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:59:58.860538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:59:58.874627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:59:58.884845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 08:59:58.891793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 08:59:58.923966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:59:58.940085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 08:59:58.968242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:59:58.981848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 08:59:58.998859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:59:59.128439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 08:59:59.155498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1025 08:59:59.579712       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:01:28 addons-273872 kubelet[1284]: I1025 09:01:28.834286    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-6dmpz" podStartSLOduration=3.166411612 podStartE2EDuration="40.834269915s" podCreationTimestamp="2025-10-25 09:00:48 +0000 UTC" firstStartedPulling="2025-10-25 09:00:48.57739041 +0000 UTC m=+48.121497581" lastFinishedPulling="2025-10-25 09:01:26.245248726 +0000 UTC m=+85.789355884" observedRunningTime="2025-10-25 09:01:26.829452691 +0000 UTC m=+86.373559868" watchObservedRunningTime="2025-10-25 09:01:28.834269915 +0000 UTC m=+88.378377087"
	Oct 25 09:01:28 addons-273872 kubelet[1284]: I1025 09:01:28.834668    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpath-resizer-0" podStartSLOduration=41.641749164 podStartE2EDuration="1m20.834662807s" podCreationTimestamp="2025-10-25 09:00:08 +0000 UTC" firstStartedPulling="2025-10-25 09:00:48.577865795 +0000 UTC m=+48.121972965" lastFinishedPulling="2025-10-25 09:01:27.770779449 +0000 UTC m=+87.314886608" observedRunningTime="2025-10-25 09:01:28.833866501 +0000 UTC m=+88.377973677" watchObservedRunningTime="2025-10-25 09:01:28.834662807 +0000 UTC m=+88.378769982"
	Oct 25 09:01:30 addons-273872 kubelet[1284]: I1025 09:01:30.834926    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-p8cjx" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:01:30 addons-273872 kubelet[1284]: I1025 09:01:30.844650    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-p8cjx" podStartSLOduration=1.085327516 podStartE2EDuration="42.844629871s" podCreationTimestamp="2025-10-25 09:00:48 +0000 UTC" firstStartedPulling="2025-10-25 09:00:48.587148955 +0000 UTC m=+48.131256126" lastFinishedPulling="2025-10-25 09:01:30.346451323 +0000 UTC m=+89.890558481" observedRunningTime="2025-10-25 09:01:30.843984758 +0000 UTC m=+90.388091934" watchObservedRunningTime="2025-10-25 09:01:30.844629871 +0000 UTC m=+90.388737047"
	Oct 25 09:01:31 addons-273872 kubelet[1284]: I1025 09:01:31.837572    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-p8cjx" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:01:34 addons-273872 kubelet[1284]: I1025 09:01:34.848586    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-s6vt6" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:01:34 addons-273872 kubelet[1284]: I1025 09:01:34.858830    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-s6vt6" podStartSLOduration=1.558922396 podStartE2EDuration="46.858810289s" podCreationTimestamp="2025-10-25 09:00:48 +0000 UTC" firstStartedPulling="2025-10-25 09:00:48.600914334 +0000 UTC m=+48.145021493" lastFinishedPulling="2025-10-25 09:01:33.900802214 +0000 UTC m=+93.444909386" observedRunningTime="2025-10-25 09:01:34.857930597 +0000 UTC m=+94.402037773" watchObservedRunningTime="2025-10-25 09:01:34.858810289 +0000 UTC m=+94.402917465"
	Oct 25 09:01:35 addons-273872 kubelet[1284]: I1025 09:01:35.851766    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-s6vt6" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:01:36 addons-273872 kubelet[1284]: I1025 09:01:36.869338    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-w9btk" podStartSLOduration=65.00439796 podStartE2EDuration="1m29.869317041s" podCreationTimestamp="2025-10-25 09:00:07 +0000 UTC" firstStartedPulling="2025-10-25 09:01:11.760893481 +0000 UTC m=+71.305000640" lastFinishedPulling="2025-10-25 09:01:36.625812563 +0000 UTC m=+96.169919721" observedRunningTime="2025-10-25 09:01:36.869178951 +0000 UTC m=+96.413286129" watchObservedRunningTime="2025-10-25 09:01:36.869317041 +0000 UTC m=+96.413424218"
	Oct 25 09:01:40 addons-273872 kubelet[1284]: I1025 09:01:40.890200    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-bjgg6" podStartSLOduration=66.841691687 podStartE2EDuration="1m26.890177346s" podCreationTimestamp="2025-10-25 09:00:14 +0000 UTC" firstStartedPulling="2025-10-25 09:01:20.318578092 +0000 UTC m=+79.862685254" lastFinishedPulling="2025-10-25 09:01:40.367063737 +0000 UTC m=+99.911170913" observedRunningTime="2025-10-25 09:01:40.888542954 +0000 UTC m=+100.432650130" watchObservedRunningTime="2025-10-25 09:01:40.890177346 +0000 UTC m=+100.434284524"
	Oct 25 09:01:44 addons-273872 kubelet[1284]: I1025 09:01:44.904325    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-cdlhj" podStartSLOduration=73.97100866 podStartE2EDuration="1m37.904302239s" podCreationTimestamp="2025-10-25 09:00:07 +0000 UTC" firstStartedPulling="2025-10-25 09:01:20.33407555 +0000 UTC m=+79.878182710" lastFinishedPulling="2025-10-25 09:01:44.267369124 +0000 UTC m=+103.811476289" observedRunningTime="2025-10-25 09:01:44.903689357 +0000 UTC m=+104.447796535" watchObservedRunningTime="2025-10-25 09:01:44.904302239 +0000 UTC m=+104.448409417"
	Oct 25 09:01:46 addons-273872 kubelet[1284]: I1025 09:01:46.594550    1284 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 25 09:01:46 addons-273872 kubelet[1284]: I1025 09:01:46.594594    1284 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 25 09:01:49 addons-273872 kubelet[1284]: I1025 09:01:49.935460    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-p89jc" podStartSLOduration=1.117895887 podStartE2EDuration="1m1.935437223s" podCreationTimestamp="2025-10-25 09:00:48 +0000 UTC" firstStartedPulling="2025-10-25 09:00:48.584263839 +0000 UTC m=+48.128370996" lastFinishedPulling="2025-10-25 09:01:49.401805163 +0000 UTC m=+108.945912332" observedRunningTime="2025-10-25 09:01:49.933987557 +0000 UTC m=+109.478094753" watchObservedRunningTime="2025-10-25 09:01:49.935437223 +0000 UTC m=+109.479544400"
	Oct 25 09:01:50 addons-273872 kubelet[1284]: I1025 09:01:50.538841    1284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c379a5c-21e7-4c6c-a695-914a7d5594a3" path="/var/lib/kubelet/pods/8c379a5c-21e7-4c6c-a695-914a7d5594a3/volumes"
	Oct 25 09:01:50 addons-273872 kubelet[1284]: I1025 09:01:50.539308    1284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6990586-85ec-4b50-ae45-0251e44e96fc" path="/var/lib/kubelet/pods/e6990586-85ec-4b50-ae45-0251e44e96fc/volumes"
	Oct 25 09:01:51 addons-273872 kubelet[1284]: E1025 09:01:51.986964    1284 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 25 09:01:51 addons-273872 kubelet[1284]: E1025 09:01:51.987049    1284 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10616cc6-5266-4eaf-b6cf-f732ba0431ed-gcr-creds podName:10616cc6-5266-4eaf-b6cf-f732ba0431ed nodeName:}" failed. No retries permitted until 2025-10-25 09:02:55.987034083 +0000 UTC m=+175.531141237 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/10616cc6-5266-4eaf-b6cf-f732ba0431ed-gcr-creds") pod "registry-creds-764b6fb674-7gfht" (UID: "10616cc6-5266-4eaf-b6cf-f732ba0431ed") : secret "registry-creds-gcr" not found
	Oct 25 09:01:52 addons-273872 kubelet[1284]: I1025 09:01:52.490728    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pntgw\" (UniqueName: \"kubernetes.io/projected/aa24d212-b05e-42d4-9f1c-f48910024818-kube-api-access-pntgw\") pod \"busybox\" (UID: \"aa24d212-b05e-42d4-9f1c-f48910024818\") " pod="default/busybox"
	Oct 25 09:01:52 addons-273872 kubelet[1284]: I1025 09:01:52.490800    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/aa24d212-b05e-42d4-9f1c-f48910024818-gcp-creds\") pod \"busybox\" (UID: \"aa24d212-b05e-42d4-9f1c-f48910024818\") " pod="default/busybox"
	Oct 25 09:01:54 addons-273872 kubelet[1284]: I1025 09:01:54.952269    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.929573654 podStartE2EDuration="2.952250391s" podCreationTimestamp="2025-10-25 09:01:52 +0000 UTC" firstStartedPulling="2025-10-25 09:01:52.779098435 +0000 UTC m=+112.323205590" lastFinishedPulling="2025-10-25 09:01:54.801775153 +0000 UTC m=+114.345882327" observedRunningTime="2025-10-25 09:01:54.951635058 +0000 UTC m=+114.495742216" watchObservedRunningTime="2025-10-25 09:01:54.952250391 +0000 UTC m=+114.496357570"
	Oct 25 09:02:00 addons-273872 kubelet[1284]: I1025 09:02:00.524570    1284 scope.go:117] "RemoveContainer" containerID="4278d9a11e58040d002a158b3935a5eba1d9f1386aa17ce029c723f3257bff02"
	Oct 25 09:02:00 addons-273872 kubelet[1284]: I1025 09:02:00.532925    1284 scope.go:117] "RemoveContainer" containerID="d442487e59df32340d92abf659e87c3f6d6338362e7fc842707f769a907bd5bc"
	Oct 25 09:02:02 addons-273872 kubelet[1284]: I1025 09:02:02.469528    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm2nh\" (UniqueName: \"kubernetes.io/projected/e0abdafe-c76b-4464-b70e-72d4f797a77c-kube-api-access-tm2nh\") pod \"nginx\" (UID: \"e0abdafe-c76b-4464-b70e-72d4f797a77c\") " pod="default/nginx"
	Oct 25 09:02:02 addons-273872 kubelet[1284]: I1025 09:02:02.469597    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e0abdafe-c76b-4464-b70e-72d4f797a77c-gcp-creds\") pod \"nginx\" (UID: \"e0abdafe-c76b-4464-b70e-72d4f797a77c\") " pod="default/nginx"
	
	
	==> storage-provisioner [f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da] <==
	W1025 09:01:38.995869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:40.999743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:41.004055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:43.007230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:43.011151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:45.013947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:45.018139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:47.021200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:47.024921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:49.090422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:49.256684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:51.259746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:51.263426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:53.266294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:53.269844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:55.273092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:55.276645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:57.279651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:57.283215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:59.286269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:59.289991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:02:01.292769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:02:01.296309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:02:03.299765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:02:03.303115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-273872 -n addons-273872
helpers_test.go:269: (dbg) Run:  kubectl --context addons-273872 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx ingress-nginx-admission-create-l8qdq ingress-nginx-admission-patch-gvs8h registry-creds-764b6fb674-7gfht
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-273872 describe pod nginx ingress-nginx-admission-create-l8qdq ingress-nginx-admission-patch-gvs8h registry-creds-764b6fb674-7gfht
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-273872 describe pod nginx ingress-nginx-admission-create-l8qdq ingress-nginx-admission-patch-gvs8h registry-creds-764b6fb674-7gfht: exit status 1 (81.149785ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-273872/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:02:02 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tm2nh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tm2nh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/nginx to addons-273872
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx:alpine"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-l8qdq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gvs8h" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-7gfht" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-273872 describe pod nginx ingress-nginx-admission-create-l8qdq ingress-nginx-admission-patch-gvs8h registry-creds-764b6fb674-7gfht: exit status 1
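Note: `kubectl describe pod` exits non-zero whenever any of the named pods is NotFound, so this post-mortem step reports exit status 1 even though the one surviving pod (nginx) was described successfully above. A minimal sketch for collecting whatever is still collectable, assuming the same context and pod names shown in the log:

    # hypothetical helper: describe each pod separately so one missing pod
    # does not mask the describe output for the others
    for p in nginx ingress-nginx-admission-create-l8qdq ingress-nginx-admission-patch-gvs8h registry-creds-764b6fb674-7gfht; do
      kubectl --context addons-273872 describe pod "$p" || echo "skipped $p (not found)"
    done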
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable headlamp --alsologtostderr -v=1: exit status 11 (306.91804ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:02:04.488542  145109 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:04.488688  145109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:04.488700  145109 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:04.488706  145109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:04.488986  145109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:04.489367  145109 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:04.489845  145109 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:04.489868  145109 addons.go:606] checking whether the cluster is paused
	I1025 09:02:04.489987  145109 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:04.490003  145109 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:04.490527  145109 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:04.515022  145109 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:04.515087  145109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:04.544212  145109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:04.656754  145109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:04.656832  145109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:04.695415  145109 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:04.695439  145109 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:04.695446  145109 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:04.695451  145109 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:04.695456  145109 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:04.695460  145109 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:04.695465  145109 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:04.695468  145109 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:04.695472  145109 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:04.695489  145109 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:04.695493  145109 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:04.695498  145109 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:04.695502  145109 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:04.695506  145109 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:04.695510  145109 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:04.695515  145109 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:04.695522  145109 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:04.695526  145109 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:04.695530  145109 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:04.695533  145109 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:04.695540  145109 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:04.695544  145109 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:04.695548  145109 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:04.695551  145109 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:04.695555  145109 cri.go:89] found id: ""
	I1025 09:02:04.695601  145109 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:04.713361  145109 out.go:203] 
	W1025 09:02:04.714920  145109 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:04.714941  145109 out.go:285] * 
	* 
	W1025 09:02:04.720092  145109 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:04.721359  145109 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.72s)
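Note: every MK_ADDON_DISABLE_PAUSED failure in this report shares the same root cause. Before disabling an addon, minikube checks whether the cluster is paused: it lists kube-system containers via crictl (which succeeds, as the "found id:" lines above show) and then runs `sudo runc list -f json`, which fails because /run/runc does not exist on this node. On a crio runtime this state directory is plausibly never created, e.g. if crio invokes its OCI runtime with a non-default state root or uses crun instead of runc; that is an inference from the error, not something the log states. A hedged way to verify on the node, assuming the profile name from this run:

    # containers are visible through the CRI ...
    minikube -p addons-273872 ssh -- sudo crictl ps --quiet --label io.kubernetes.pod.namespace=kube-system
    # ... while runc's default state directory is absent, matching the error above
    minikube -p addons-273872 ssh -- ls /run/runc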

x
+
TestAddons/parallel/CloudSpanner (5.25s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-x46xr" [62faca53-d8b5-4874-9901-b9c59e670bf1] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003192569s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (242.227751ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:02:33.778235  147562 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:33.778549  147562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:33.778560  147562 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:33.778564  147562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:33.778741  147562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:33.779016  147562 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:33.779319  147562 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:33.779333  147562 addons.go:606] checking whether the cluster is paused
	I1025 09:02:33.779437  147562 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:33.779458  147562 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:33.779814  147562 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:33.797503  147562 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:33.797572  147562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:33.815711  147562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:33.914000  147562 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:33.914082  147562 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:33.942266  147562 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:33.942286  147562 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:33.942290  147562 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:33.942294  147562 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:33.942297  147562 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:33.942300  147562 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:33.942303  147562 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:33.942305  147562 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:33.942307  147562 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:33.942312  147562 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:33.942316  147562 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:33.942319  147562 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:33.942326  147562 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:33.942336  147562 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:33.942355  147562 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:33.942361  147562 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:33.942365  147562 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:33.942371  147562 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:33.942374  147562 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:33.942377  147562 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:33.942381  147562 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:33.942385  147562 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:33.942389  147562 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:33.942393  147562 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:33.942396  147562 cri.go:89] found id: ""
	I1025 09:02:33.942439  147562 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:33.956195  147562 out.go:203] 
	W1025 09:02:33.957370  147562 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:33.957408  147562 out.go:285] * 
	* 
	W1025 09:02:33.960759  147562 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:33.962017  147562 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)
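Note: this appears to be the same paused-check failure analyzed under TestAddons/parallel/Headlamp above; the cloud-spanner-emulator pod itself became healthy within ~5s, and only the trailing `addons disable` step failed. The LocalPath failure below follows the identical pattern.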

x
+
TestAddons/parallel/LocalPath (10.12s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-273872 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-273872 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-273872 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [105210e4-f93a-4ea2-828b-f14891f43916] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [105210e4-f93a-4ea2-828b-f14891f43916] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [105210e4-f93a-4ea2-828b-f14891f43916] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003386809s
addons_test.go:967: (dbg) Run:  kubectl --context addons-273872 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 ssh "cat /opt/local-path-provisioner/pvc-c6e0cb1d-628c-460d-83f5-992a360dc1c7_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-273872 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-273872 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (253.514234ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:02:30.392723  147404 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:30.393019  147404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:30.393030  147404 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:30.393035  147404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:30.393247  147404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:30.393588  147404 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:30.393949  147404 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:30.393969  147404 addons.go:606] checking whether the cluster is paused
	I1025 09:02:30.394067  147404 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:30.394083  147404 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:30.394483  147404 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:30.412570  147404 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:30.412630  147404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:30.429501  147404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:30.528002  147404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:30.528163  147404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:30.561330  147404 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:30.561368  147404 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:30.561374  147404 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:30.561378  147404 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:30.561383  147404 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:30.561388  147404 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:30.561391  147404 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:30.561395  147404 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:30.561398  147404 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:30.561406  147404 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:30.561410  147404 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:30.561414  147404 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:30.561419  147404 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:30.561429  147404 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:30.561437  147404 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:30.561443  147404 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:30.561450  147404 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:30.561455  147404 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:30.561459  147404 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:30.561463  147404 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:30.561470  147404 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:30.561473  147404 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:30.561480  147404 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:30.561483  147404 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:30.561485  147404 cri.go:89] found id: ""
	I1025 09:02:30.561525  147404 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:30.578130  147404 out.go:203] 
	W1025 09:02:30.580076  147404 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:30.580101  147404 out.go:285] * 
	* 
	W1025 09:02:30.585299  147404 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:30.587115  147404 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.12s)
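The failures in this group (LocalPath above, and NvidiaDevicePlugin, Yakd, and AmdGpuDevicePlugin below) all fail the same way: before disabling an addon, minikube checks whether the cluster is paused, and that check shells out to "sudo runc list -f json", which exits non-zero because /run/runc is missing on the node. The crictl listing succeeds, so only the runc state lookup is broken. A minimal reproduction sketch, assuming the profile name from this report; the alternate state root on the last line is an assumption to verify against the node's CRI-O runtime_root setting, not a path taken from these logs:

    # The container listing that succeeds in the logs above:
    minikube -p addons-273872 ssh -- sudo crictl ps -a --quiet \
        --label io.kubernetes.pod.namespace=kube-system
    # The state lookup that fails ("open /run/runc: no such file or directory"):
    minikube -p addons-273872 ssh -- sudo runc list -f json
    # runc's --root flag overrides the default /run/runc state directory; if
    # CRI-O keeps runtime state elsewhere, point runc at that directory
    # (hypothetical path, adjust to the node's configuration):
    minikube -p addons-273872 ssh -- sudo runc --root /run/crio/runc list -f json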

TestAddons/parallel/NvidiaDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-6dmpz" [bcd43d18-a3a8-4a82-9fc3-425548e2e636] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0033087s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (253.242297ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:02:12.369518  145977 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:12.369801  145977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:12.369812  145977 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:12.369817  145977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:12.370013  145977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:12.370274  145977 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:12.370648  145977 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:12.370667  145977 addons.go:606] checking whether the cluster is paused
	I1025 09:02:12.370749  145977 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:12.370764  145977 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:12.371139  145977 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:12.391506  145977 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:12.391603  145977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:12.409864  145977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:12.510997  145977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:12.511060  145977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:12.542630  145977 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:12.542666  145977 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:12.542670  145977 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:12.542673  145977 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:12.542675  145977 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:12.542679  145977 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:12.542681  145977 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:12.542683  145977 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:12.542686  145977 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:12.542695  145977 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:12.542698  145977 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:12.542701  145977 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:12.542703  145977 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:12.542706  145977 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:12.542708  145977 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:12.542721  145977 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:12.542728  145977 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:12.542733  145977 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:12.542735  145977 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:12.542737  145977 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:12.542740  145977 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:12.542742  145977 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:12.542744  145977 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:12.542747  145977 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:12.542749  145977 cri.go:89] found id: ""
	I1025 09:02:12.542799  145977 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:12.557077  145977 out.go:203] 
	W1025 09:02:12.558239  145977 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:12.558256  145977 out.go:285] * 
	* 
	W1025 09:02:12.561846  145977 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:12.562996  145977 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

TestAddons/parallel/Yakd (5.25s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-8sg9x" [f14de185-1b98-427f-9c1e-9031f9a9a132] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003342412s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable yakd --alsologtostderr -v=1: exit status 11 (241.95671ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:02:22.871799  146757 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:22.872077  146757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:22.872087  146757 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:22.872092  146757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:22.872267  146757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:22.872548  146757 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:22.872899  146757 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:22.872914  146757 addons.go:606] checking whether the cluster is paused
	I1025 09:02:22.872994  146757 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:22.873008  146757 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:22.873377  146757 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:22.891457  146757 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:22.891535  146757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:22.908589  146757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:23.007013  146757 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:23.007095  146757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:23.036279  146757 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:23.036302  146757 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:23.036306  146757 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:23.036309  146757 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:23.036312  146757 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:23.036316  146757 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:23.036319  146757 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:23.036322  146757 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:23.036324  146757 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:23.036334  146757 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:23.036337  146757 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:23.036339  146757 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:23.036341  146757 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:23.036359  146757 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:23.036363  146757 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:23.036383  146757 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:23.036392  146757 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:23.036396  146757 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:23.036399  146757 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:23.036401  146757 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:23.036404  146757 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:23.036406  146757 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:23.036408  146757 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:23.036411  146757 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:23.036413  146757 cri.go:89] found id: ""
	I1025 09:02:23.036467  146757 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:23.050309  146757 out.go:203] 
	W1025 09:02:23.051430  146757 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:23.051448  146757 out.go:285] * 
	* 
	W1025 09:02:23.054450  146757 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:23.055643  146757 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-p8cjx" [7df88268-84bc-4cef-97da-8345d34f20d3] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003763586s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-273872 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-273872 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (241.413539ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:02:17.626078  146295 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:02:17.626370  146295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:17.626381  146295 out.go:374] Setting ErrFile to fd 2...
	I1025 09:02:17.626385  146295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:02:17.626595  146295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:02:17.626846  146295 mustload.go:65] Loading cluster: addons-273872
	I1025 09:02:17.627181  146295 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:17.627195  146295 addons.go:606] checking whether the cluster is paused
	I1025 09:02:17.627273  146295 config.go:182] Loaded profile config "addons-273872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:02:17.627284  146295 host.go:66] Checking if "addons-273872" exists ...
	I1025 09:02:17.627680  146295 cli_runner.go:164] Run: docker container inspect addons-273872 --format={{.State.Status}}
	I1025 09:02:17.644753  146295 ssh_runner.go:195] Run: systemctl --version
	I1025 09:02:17.644805  146295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-273872
	I1025 09:02:17.661818  146295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/addons-273872/id_rsa Username:docker}
	I1025 09:02:17.759963  146295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:02:17.760026  146295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:02:17.789382  146295 cri.go:89] found id: "6acc989b2a2225d89c0139e5d01a7c3e722a17bacc777211a249505e0c98dfde"
	I1025 09:02:17.789403  146295 cri.go:89] found id: "3ef84406aa71455b6de1dd991735b55151760349fe2324174f32003ba3bab3a6"
	I1025 09:02:17.789407  146295 cri.go:89] found id: "bfbbb33612538e8ef5fcb258abbd95f420b5a5ac465ecbcf4d458c6bc6e2e38e"
	I1025 09:02:17.789410  146295 cri.go:89] found id: "30d14efd00c17dff3baa060c7f2eacaea9fee261ffff3a40817d920c70f7a1b1"
	I1025 09:02:17.789413  146295 cri.go:89] found id: "3c4dfd048ae14042cb2dd535dfd35f2830a4290f9ef179dd30ae8ebba1c31a9e"
	I1025 09:02:17.789416  146295 cri.go:89] found id: "7ed2f0ed5954858ab8b256dd7a28ee29951b8dc0b80a0b3be518e80869d79f4f"
	I1025 09:02:17.789418  146295 cri.go:89] found id: "9fc2a24b06ef7a84582e95e03fcb1a9f5fa59ca6e653388015de5bce16b2098b"
	I1025 09:02:17.789421  146295 cri.go:89] found id: "36e423e3e9d3f8607f12ae97290100bad7e6a20a2f191e6f20e0a9dbd1c955bd"
	I1025 09:02:17.789423  146295 cri.go:89] found id: "8f0ebcd8090442d43ac07f440e77c7fb785f836534fea4fbd3af7f5a9d5c92a3"
	I1025 09:02:17.789428  146295 cri.go:89] found id: "428c8023af396511adb70251f87e08e7a0348af7ea7b391566b9f6d720846eae"
	I1025 09:02:17.789430  146295 cri.go:89] found id: "63ac188b24d3aece814f7965aeb3fc8826585e8716e0d9712e6c24c67de79b2e"
	I1025 09:02:17.789432  146295 cri.go:89] found id: "d2cd04d0db0a96294bc519f8d661edb9555660536f71eaa38a63faa12c9ecd60"
	I1025 09:02:17.789435  146295 cri.go:89] found id: "99c81d2cbcf1373b9e986edd9cb06fe6e17af80281bd513e9c184715993690af"
	I1025 09:02:17.789437  146295 cri.go:89] found id: "9fe9c1838c296605cfd15a7d3a82dcf768d949c52d59c1d953a6e6031f8e6bb0"
	I1025 09:02:17.789440  146295 cri.go:89] found id: "a768f7fc3ff87846a8a1fc193f45e90d420c2384014e24853286ce24205e39e9"
	I1025 09:02:17.789460  146295 cri.go:89] found id: "5123be046b86f9088a95642cee7771736e4aa6d00228c4b52c9ce8fe6fc983d1"
	I1025 09:02:17.789470  146295 cri.go:89] found id: "0c53c0cc8c97408e395761582dcb19a6bd13bdb6fdb20adbe17e7425844245e6"
	I1025 09:02:17.789477  146295 cri.go:89] found id: "f6a1623c75ccd3731e08ba7c5cf4f2e2d4981b7012e2cb63e51a031c2d0839da"
	I1025 09:02:17.789481  146295 cri.go:89] found id: "856adda6d4a269f0840b32ee45117e16786dc583569513442f2836ffdeae8b23"
	I1025 09:02:17.789485  146295 cri.go:89] found id: "b61ce248f4c774901b5b79e3a742ad5afdba36e0d2fa91f7059ea628af2578fa"
	I1025 09:02:17.789488  146295 cri.go:89] found id: "d47c77a17465c61f43d01df2e570cf4f0920d4333585ba36bb3b062b0ad245b6"
	I1025 09:02:17.789491  146295 cri.go:89] found id: "8ce2136d4288fb4d8468a78bac8ea32ab90854d7bd4416ca9904da1040df01fa"
	I1025 09:02:17.789493  146295 cri.go:89] found id: "274bb680de1b51fcc087361608941e440ab97122abfb1cdd94dbb7ad5d9f4afa"
	I1025 09:02:17.789495  146295 cri.go:89] found id: "34b878e3a18d682bb517910ab586818dedf3985d76e5dfb859b8c455fef6342f"
	I1025 09:02:17.789497  146295 cri.go:89] found id: ""
	I1025 09:02:17.789540  146295 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:02:17.803621  146295 out.go:203] 
	W1025 09:02:17.804741  146295 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:02:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:02:17.804761  146295 out.go:285] * 
	* 
	W1025 09:02:17.807751  146295 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:02:17.809076  146295 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-273872 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

TestFunctional/parallel/ServiceCmdConnect (602.9s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-063906 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-063906 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-hdrbz" [e34b122f-285e-4d2f-8a5c-0d7b0abc50d2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-063906 -n functional-063906
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-25 09:18:13.69230974 +0000 UTC m=+1156.496482099
functional_test.go:1645: (dbg) Run:  kubectl --context functional-063906 describe po hello-node-connect-7d85dfc575-hdrbz -n default
functional_test.go:1645: (dbg) kubectl --context functional-063906 describe po hello-node-connect-7d85dfc575-hdrbz -n default:
Name:             hello-node-connect-7d85dfc575-hdrbz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-063906/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:08:13 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w9lxl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-w9lxl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hdrbz to functional-063906
Normal   Pulling    7m13s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m13s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m13s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-063906 logs hello-node-connect-7d85dfc575-hdrbz -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-063906 logs hello-node-connect-7d85dfc575-hdrbz -n default: exit status 1 (61.60174ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-hdrbz" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-063906 logs hello-node-connect-7d85dfc575-hdrbz -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
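The kubelet events above point at CRI-O short-name resolution rather than a registry outage: with short-name-mode set to enforcing, an unqualified reference such as kicbase/echo-server that could resolve against more than one unqualified-search registry is rejected as an "ambiguous list". Two possible workarounds, sketched under the assumption that docker.io is the intended registry (this report does not say which registry the image should come from):

    # Deploy with a fully qualified reference instead of a short name:
    kubectl --context functional-063906 create deployment hello-node-connect \
        --image docker.io/kicbase/echo-server:latest

    # Or add a short-name alias on the node in a registries.conf drop-in,
    # e.g. /etc/containers/registries.conf.d/99-echo-server.conf:
    #   [aliases]
    #   "kicbase/echo-server" = "docker.io/kicbase/echo-server"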
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-063906 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-hdrbz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-063906/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:08:13 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w9lxl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-w9lxl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hdrbz to functional-063906
Normal   Pulling    7m13s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m13s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m13s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-063906 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-063906 logs -l app=hello-node-connect: exit status 1 (62.478258ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-hdrbz" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-063906 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-063906 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.171.240
IPs:                      10.96.171.240
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30304/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
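Note the empty Endpoints field above: because the pod never became Ready, the service has no backends, so NodePort 30304 has nothing to route to. A quick confirmation sketch, using the context from this report:

    kubectl --context functional-063906 get endpoints hello-node-connect
    # NAME                 ENDPOINTS   AGE
    # hello-node-connect   <none>      ...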
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-063906
helpers_test.go:243: (dbg) docker inspect functional-063906:

-- stdout --
	[
	    {
	        "Id": "7fd3a16a66bdbeba6c87d79b511cf042eabe7e264620059b02e9e6eb7699e39a",
	        "Created": "2025-10-25T09:05:53.238187918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 158058,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:05:53.274526189Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7fd3a16a66bdbeba6c87d79b511cf042eabe7e264620059b02e9e6eb7699e39a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fd3a16a66bdbeba6c87d79b511cf042eabe7e264620059b02e9e6eb7699e39a/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fd3a16a66bdbeba6c87d79b511cf042eabe7e264620059b02e9e6eb7699e39a/hosts",
	        "LogPath": "/var/lib/docker/containers/7fd3a16a66bdbeba6c87d79b511cf042eabe7e264620059b02e9e6eb7699e39a/7fd3a16a66bdbeba6c87d79b511cf042eabe7e264620059b02e9e6eb7699e39a-json.log",
	        "Name": "/functional-063906",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-063906:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-063906",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fd3a16a66bdbeba6c87d79b511cf042eabe7e264620059b02e9e6eb7699e39a",
	                "LowerDir": "/var/lib/docker/overlay2/1d49940d4132ab660b6b2352e8e0a94652374ad2d0095e27031e0f7b548a6c38-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d49940d4132ab660b6b2352e8e0a94652374ad2d0095e27031e0f7b548a6c38/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d49940d4132ab660b6b2352e8e0a94652374ad2d0095e27031e0f7b548a6c38/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d49940d4132ab660b6b2352e8e0a94652374ad2d0095e27031e0f7b548a6c38/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-063906",
	                "Source": "/var/lib/docker/volumes/functional-063906/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-063906",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-063906",
	                "name.minikube.sigs.k8s.io": "functional-063906",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8d12cd688440491f95ea13bea63a1ebca1d9b518c824c66b6686b221737782c1",
	            "SandboxKey": "/var/run/docker/netns/8d12cd688440",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-063906": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:18:87:52:67:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a7162b0b34475a67767f890488e7f75ede8370e514a73dae4c4c16e3ad6be2b",
	                    "EndpointID": "af408403e9ffe0ac8b55f8dad99a98938443a669ed1a97fdd7ecba1c65691d6e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-063906",
	                        "7fd3a16a66bd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
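The Ports map under NetworkSettings above is what the cli_runner steps earlier in this report query to find the node's SSH endpoint. The same Go-template lookup can be run by hand; a sketch reusing the container name and the 22/tcp mapping shown above:

    docker container inspect functional-063906 \
        --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # 32898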
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-063906 -n functional-063906
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-063906 logs -n 25: (1.271274246s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-063906 ssh findmnt -T /mount1                                                                           │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │                     │
	│ mount          │ -p functional-063906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3519995462/001:/mount1 --alsologtostderr -v=1 │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │                     │
	│ ssh            │ functional-063906 ssh findmnt -T /mount1                                                                           │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ ssh            │ functional-063906 ssh findmnt -T /mount2                                                                           │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ ssh            │ functional-063906 ssh findmnt -T /mount3                                                                           │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ mount          │ -p functional-063906 --kill=true                                                                                   │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │                     │
	│ start          │ -p functional-063906 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │                     │
	│ start          │ -p functional-063906 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │                     │
	│ start          │ -p functional-063906 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-063906 --alsologtostderr -v=1                                                     │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ image          │ functional-063906 image ls --format short --alsologtostderr                                                        │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ ssh            │ functional-063906 ssh pgrep buildkitd                                                                              │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │                     │
	│ image          │ functional-063906 image build -t localhost/my-image:functional-063906 testdata/build --alsologtostderr             │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ image          │ functional-063906 image ls                                                                                         │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ image          │ functional-063906 image ls --format yaml --alsologtostderr                                                         │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ image          │ functional-063906 image ls --format json --alsologtostderr                                                         │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ image          │ functional-063906 image ls --format table --alsologtostderr                                                        │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ update-context │ functional-063906 update-context --alsologtostderr -v=2                                                            │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ update-context │ functional-063906 update-context --alsologtostderr -v=2                                                            │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ update-context │ functional-063906 update-context --alsologtostderr -v=2                                                            │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:08 UTC │ 25 Oct 25 09:08 UTC │
	│ service        │ functional-063906 service list                                                                                     │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:17 UTC │ 25 Oct 25 09:17 UTC │
	│ service        │ functional-063906 service list -o json                                                                             │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:17 UTC │ 25 Oct 25 09:17 UTC │
	│ service        │ functional-063906 service --namespace=default --https --url hello-node                                             │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:17 UTC │                     │
	│ service        │ functional-063906 service hello-node --url --format={{.IP}}                                                        │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:17 UTC │                     │
	│ service        │ functional-063906 service hello-node --url                                                                         │ functional-063906 │ jenkins │ v1.37.0 │ 25 Oct 25 09:17 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:08:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:08:13.139123  173233 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:08:13.139408  173233 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:08:13.139423  173233 out.go:374] Setting ErrFile to fd 2...
	I1025 09:08:13.139429  173233 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:08:13.139738  173233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:08:13.140204  173233 out.go:368] Setting JSON to false
	I1025 09:08:13.141190  173233 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3037,"bootTime":1761380256,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:08:13.141279  173233 start.go:141] virtualization: kvm guest
	I1025 09:08:13.142857  173233 out.go:179] * [functional-063906] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:08:13.143883  173233 notify.go:220] Checking for updates...
	I1025 09:08:13.143889  173233 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:08:13.145026  173233 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:08:13.146117  173233 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:08:13.149827  173233 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:08:13.150979  173233 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:08:13.152083  173233 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:08:13.110213  173222 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:08:13.110729  173222 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:08:13.135747  173222 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:08:13.135903  173222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:08:13.197159  173222 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 09:08:13.187340605 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:08:13.197301  173222 docker.go:318] overlay module found
	I1025 09:08:13.201767  173222 out.go:179] * Using the docker driver based on existing profile
	I1025 09:08:13.202889  173222 start.go:305] selected driver: docker
	I1025 09:08:13.202917  173222 start.go:925] validating driver "docker" against &{Name:functional-063906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-063906 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:08:13.203052  173222 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:08:13.205026  173222 out.go:203] 
	W1025 09:08:13.206088  173222 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 09:08:13.207115  173222 out.go:203] 
	I1025 09:08:13.153828  173233 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:08:13.154489  173233 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:08:13.183417  173233 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:08:13.183533  173233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:08:13.242419  173233 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 09:08:13.231860893 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:08:13.242535  173233 docker.go:318] overlay module found
	I1025 09:08:13.244087  173233 out.go:179] * Using the docker driver based on existing profile
	I1025 09:08:13.245192  173233 start.go:305] selected driver: docker
	I1025 09:08:13.245204  173233 start.go:925] validating driver "docker" against &{Name:functional-063906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-063906 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:08:13.245287  173233 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:08:13.245388  173233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:08:13.316140  173233 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 09:08:13.30413982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:08:13.317007  173233 cni.go:84] Creating CNI manager for ""
	I1025 09:08:13.317087  173233 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:08:13.317154  173233 start.go:349] cluster config:
	{Name:functional-063906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-063906 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:08:13.318751  173233 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 25 09:08:20 functional-063906 crio[3583]: time="2025-10-25T09:08:20.366219476Z" level=info msg="Started container" PID=8044 containerID=633b642e4bcc6277f41aa5aa37d20348a9ad36ec1034eaf637e15634047cb828 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5qhnt/kubernetes-dashboard id=0b6f41db-a700-4c3e-9b2b-778713042044 name=/runtime.v1.RuntimeService/StartContainer sandboxID=db74a7cc7941dbd16232ae483cecaa8a231309cdc634b5705d5e9a7de75cda86
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.274030666Z" level=info msg="Stopping pod sandbox: d7117fd0b6a96e40ce3074ff0c6d34d714909926c4d039c0af3e18192376fde4" id=0449416c-fee7-47f7-92f7-ef7bfa91a60c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.274098536Z" level=info msg="Stopped pod sandbox (already stopped): d7117fd0b6a96e40ce3074ff0c6d34d714909926c4d039c0af3e18192376fde4" id=0449416c-fee7-47f7-92f7-ef7bfa91a60c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.274528344Z" level=info msg="Removing pod sandbox: d7117fd0b6a96e40ce3074ff0c6d34d714909926c4d039c0af3e18192376fde4" id=bc8d7121-26d0-452f-b5c3-0939d7626094 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.276957617Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.277024084Z" level=info msg="Removed pod sandbox: d7117fd0b6a96e40ce3074ff0c6d34d714909926c4d039c0af3e18192376fde4" id=bc8d7121-26d0-452f-b5c3-0939d7626094 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.277425471Z" level=info msg="Stopping pod sandbox: fcafa498324ccde18fe494455d671808f1abf046f99b47ab1725fe02cac195d9" id=7ab62dec-c57d-4ea3-8c5d-b29877037e6d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.277477923Z" level=info msg="Stopped pod sandbox (already stopped): fcafa498324ccde18fe494455d671808f1abf046f99b47ab1725fe02cac195d9" id=7ab62dec-c57d-4ea3-8c5d-b29877037e6d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.277780373Z" level=info msg="Removing pod sandbox: fcafa498324ccde18fe494455d671808f1abf046f99b47ab1725fe02cac195d9" id=fb49a17b-94d6-40f2-a01d-931332c0f9c4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.280135531Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.280182279Z" level=info msg="Removed pod sandbox: fcafa498324ccde18fe494455d671808f1abf046f99b47ab1725fe02cac195d9" id=fb49a17b-94d6-40f2-a01d-931332c0f9c4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.280574145Z" level=info msg="Stopping pod sandbox: 5bf8409d3e65cd29c2644f7a95affa12ba5d4ca8a99df030c7aafbc84bf35d56" id=775e626f-05fc-43b8-8568-5a70c13a15f0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.280622141Z" level=info msg="Stopped pod sandbox (already stopped): 5bf8409d3e65cd29c2644f7a95affa12ba5d4ca8a99df030c7aafbc84bf35d56" id=775e626f-05fc-43b8-8568-5a70c13a15f0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.280915429Z" level=info msg="Removing pod sandbox: 5bf8409d3e65cd29c2644f7a95affa12ba5d4ca8a99df030c7aafbc84bf35d56" id=2c47c971-4ee8-417d-b210-ea724492d645 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.283153307Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:08:21 functional-063906 crio[3583]: time="2025-10-25T09:08:21.283201827Z" level=info msg="Removed pod sandbox: 5bf8409d3e65cd29c2644f7a95affa12ba5d4ca8a99df030c7aafbc84bf35d56" id=2c47c971-4ee8-417d-b210-ea724492d645 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:08:25 functional-063906 crio[3583]: time="2025-10-25T09:08:25.269921355Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=17e21173-3dbd-4769-8b38-3223c702d4a6 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:08:31 functional-063906 crio[3583]: time="2025-10-25T09:08:31.270009742Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b50ef796-30f9-415d-8a02-d512273dcf8c name=/runtime.v1.ImageService/PullImage
	Oct 25 09:08:48 functional-063906 crio[3583]: time="2025-10-25T09:08:48.270467889Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7ba56046-ca0b-4349-8905-cf263f46a77c name=/runtime.v1.ImageService/PullImage
	Oct 25 09:09:24 functional-063906 crio[3583]: time="2025-10-25T09:09:24.270140497Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=302ca815-587c-4188-bb39-0183da42b3d5 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:09:29 functional-063906 crio[3583]: time="2025-10-25T09:09:29.269898864Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f9bd6038-a40a-446a-ba96-7f30a10f1840 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:10:46 functional-063906 crio[3583]: time="2025-10-25T09:10:46.269342365Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9db10a30-a37d-4fc6-a717-edee7d6aa65e name=/runtime.v1.ImageService/PullImage
	Oct 25 09:11:00 functional-063906 crio[3583]: time="2025-10-25T09:11:00.269936953Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=32ae5c0f-d012-4523-991c-d0e1ad91d691 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:13:33 functional-063906 crio[3583]: time="2025-10-25T09:13:33.27043171Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3e28917b-4ec5-4ec8-a838-2b4a2b1ed0d9 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:13:44 functional-063906 crio[3583]: time="2025-10-25T09:13:44.270089332Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=49a980e4-c2db-4efa-9bad-53cc13461b21 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	633b642e4bcc6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   db74a7cc7941d       kubernetes-dashboard-855c9754f9-5qhnt        kubernetes-dashboard
	5dc45506e2cf7       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   51538d447374f       dashboard-metrics-scraper-77bf4d6c4c-jz8sc   kubernetes-dashboard
	0bd21f9f44ff5       docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8                  10 minutes ago      Running             myfrontend                  0                   394f8ecf82247       sp-pod                                       default
	c5dc9ca680f8f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   d1829248c4120       busybox-mount                                default
	b2df6cc3e5de1       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   f3d044f103ee4       mysql-5bb876957f-gmnks                       default
	8a6b70f453cca       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  10 minutes ago      Running             nginx                       0                   2fd29c09f82ec       nginx-svc                                    default
	5b6e6ff941a29       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   605f71fd3e861       kube-apiserver-functional-063906             kube-system
	317faea540c26       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   928dfdc100219       kube-controller-manager-functional-063906    kube-system
	76cfa83543481       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     1                   928dfdc100219       kube-controller-manager-functional-063906    kube-system
	b3a3fb84e6edd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Running             etcd                        1                   49b5b8a12086c       etcd-functional-063906                       kube-system
	b1f495361d78f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Running             kube-scheduler              1                   80e7d40c2ebf7       kube-scheduler-functional-063906             kube-system
	9d773c699eac8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         1                   4ba097941ca91       storage-provisioner                          kube-system
	53f2db98ef3d5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   454ebfd85b5fc       kindnet-dhtms                                kube-system
	0a45fce02de30       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   8f86b2d73efd2       kube-proxy-d9vhw                             kube-system
	2b8aa83a2f87a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   5135cabdce66e       coredns-66bc5c9577-mx9zs                     kube-system
	43cd441a5ebc2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   5135cabdce66e       coredns-66bc5c9577-mx9zs                     kube-system
	4193a5cd0c812       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   4ba097941ca91       storage-provisioner                          kube-system
	d44cfefc8ed2b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   454ebfd85b5fc       kindnet-dhtms                                kube-system
	88994630e8bed       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 12 minutes ago      Exited              kube-proxy                  0                   8f86b2d73efd2       kube-proxy-d9vhw                             kube-system
	23c4ee304f986       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   80e7d40c2ebf7       kube-scheduler-functional-063906             kube-system
	21303466b69c5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   49b5b8a12086c       etcd-functional-063906                       kube-system
	
	
	==> coredns [2b8aa83a2f87a1418093cfdf456dc3ccca615991c1051d2b688000b647bad047] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44182 - 22527 "HINFO IN 9203770965371516362.4956510041038339420. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.07049936s
	
	
	==> coredns [43cd441a5ebc2b50bcbff33588b8626276c41c654951ba56741d3375ee796877] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52027 - 5966 "HINFO IN 7952496301841185764.2717899222427995194. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.080178834s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-063906
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-063906
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=functional-063906
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_06_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:06:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-063906
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:18:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:17:45 +0000   Sat, 25 Oct 2025 09:06:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:17:45 +0000   Sat, 25 Oct 2025 09:06:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:17:45 +0000   Sat, 25 Oct 2025 09:06:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:17:45 +0000   Sat, 25 Oct 2025 09:06:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-063906
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                c4f9a2a6-e610-4828-84a9-32ede5033273
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-d45tb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-hdrbz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-gmnks                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-mx9zs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-063906                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-dhtms                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-063906              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-063906     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-d9vhw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-063906              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-jz8sc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5qhnt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-063906 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-063906 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-063906 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-063906 event: Registered Node functional-063906 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-063906 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-063906 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-063906 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-063906 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-063906 event: Registered Node functional-063906 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 1c f5 68 9f 00 08 06
	[  +4.451388] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 07 4a e3 be 93 08 06
	[Oct25 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.025995] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.023888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.023905] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.024896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.022924] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +2.047850] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +4.031640] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +8.511323] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[ +16.382644] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:03] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	
	
	==> etcd [21303466b69c5b6c8c69477a452d293512bb1d1c0de549f04e3ecb8318e80bb0] <==
	{"level":"warn","ts":"2025-10-25T09:06:04.176375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:06:04.183481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:06:04.192955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:06:04.200892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:06:04.221567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:06:04.227504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:06:04.233415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53308","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:07:01.260412Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T09:07:01.260500Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-063906","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-25T09:07:01.260595Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:07:08.261440Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:07:08.261530Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:07:08.261599Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-25T09:07:08.261600Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:07:08.261671Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:07:08.261683Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:07:08.261703Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-25T09:07:08.261670Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:07:08.261720Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T09:07:08.261719Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-25T09:07:08.261730Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:07:08.264076Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-25T09:07:08.264132Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:07:08.264162Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-25T09:07:08.264176Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-063906","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [b3a3fb84e6eddbdfc33a2d321350f662b1f6cc7be8a71bb7e6b3480700f2e392] <==
	{"level":"warn","ts":"2025-10-25T09:07:22.305149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.310945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.318124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.324004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.329829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.336959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.343959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.350152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.357075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.363132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.369334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.376465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.383752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.389592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.395724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.402742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.408889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.415682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.436859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.442962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.449040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:07:22.504389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37864","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:17:22.014389Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1140}
	{"level":"info","ts":"2025-10-25T09:17:22.034292Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1140,"took":"19.613656ms","hash":1272702074,"current-db-size-bytes":3489792,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1548288,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-25T09:17:22.034368Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1272702074,"revision":1140,"compact-revision":-1}
	
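	The pair of compaction entries above is etcd's periodic MVCC housekeeping: the apiserver asks etcd to discard key history older than revision 1140, which is why "current-db-size-in-use" drops well below "current-db-size" (the freed pages are only returned to the OS by a later defragmentation). A minimal sketch, assuming a reachable local endpoint and the go.etcd.io/etcd/client/v3 module, of issuing the same kind of compaction by hand:
	
	package main
	
	import (
		"context"
		"log"
		"time"
	
		clientv3 "go.etcd.io/etcd/client/v3"
	)
	
	func main() {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"127.0.0.1:2379"}, // assumed local etcd
			DialTimeout: 5 * time.Second,
		})
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Any read reports the store's current revision in its header.
		resp, err := cli.Get(ctx, "compaction-probe")
		if err != nil {
			log.Fatal(err)
		}
		// Drop all key history older than that revision.
		if _, err := cli.Compact(ctx, resp.Header.Revision); err != nil {
			log.Fatal(err)
		}
		log.Printf("compacted up to revision %d", resp.Header.Revision)
	}
	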
	
	==> kernel <==
	 09:18:15 up  1:00,  0 user,  load average: 0.17, 0.31, 0.80
	Linux functional-063906 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [53f2db98ef3d5ed1859dec8ab1661d68deb22c646b9ddafd1ee12d0ce876ecd2] <==
	I1025 09:16:11.857628       1 main.go:301] handling current node
	I1025 09:16:21.854749       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:16:21.854784       1 main.go:301] handling current node
	I1025 09:16:31.854053       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:16:31.854106       1 main.go:301] handling current node
	I1025 09:16:41.853253       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:16:41.853300       1 main.go:301] handling current node
	I1025 09:16:51.861324       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:16:51.861380       1 main.go:301] handling current node
	I1025 09:17:01.853035       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:17:01.853084       1 main.go:301] handling current node
	I1025 09:17:11.853650       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:17:11.853688       1 main.go:301] handling current node
	I1025 09:17:21.854056       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:17:21.854087       1 main.go:301] handling current node
	I1025 09:17:31.853105       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:17:31.853141       1 main.go:301] handling current node
	I1025 09:17:41.859043       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:17:41.859074       1 main.go:301] handling current node
	I1025 09:17:51.853939       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:17:51.854001       1 main.go:301] handling current node
	I1025 09:18:01.853405       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:18:01.853444       1 main.go:301] handling current node
	I1025 09:18:11.858692       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:18:11.858734       1 main.go:301] handling current node
	
	
	==> kindnet [d44cfefc8ed2bdd81c0abebc2b63883e8af126031bb93a652b8eff16d6a930f3] <==
	I1025 09:06:12.997467       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:06:12.997724       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1025 09:06:12.997880       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:06:12.997896       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:06:12.997918       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:06:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:06:13.196894       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:06:13.196929       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:06:13.196945       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:06:13.292221       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:06:13.597049       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:06:13.597072       1 metrics.go:72] Registering metrics
	I1025 09:06:13.597133       1 controller.go:711] "Syncing nftables rules"
	I1025 09:06:23.197766       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:06:23.197810       1 main.go:301] handling current node
	I1025 09:06:33.202743       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:06:33.202798       1 main.go:301] handling current node
	I1025 09:06:43.205893       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:06:43.205930       1 main.go:301] handling current node
	I1025 09:06:53.201563       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:06:53.201601       1 main.go:301] handling current node
	
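	Both kindnet containers settle into the same steady state: connect to the apiserver, sync informer caches, program nftables once, then re-handle the node list on a fixed interval (one node here, hence the identical pair of lines roughly every ten seconds). A minimal standard-library sketch of that reconcile-on-a-ticker pattern; handleNode and the ten-second interval are read off the log timestamps, not taken from kindnet's source:
	
	package main
	
	import (
		"log"
		"time"
	)
	
	// handleNode is a hypothetical stand-in for kindnet's per-node sync
	// (routes, nftables rules, and so on).
	func handleNode(ips map[string]struct{}) {
		log.Printf("Handling node with IPs: %v", ips)
	}
	
	func main() {
		nodeIPs := map[string]struct{}{"192.168.49.2": {}}
	
		ticker := time.NewTicker(10 * time.Second) // interval inferred from the log
		defer ticker.Stop()
		for range ticker.C {
			handleNode(nodeIPs)
		}
	}
	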
	
	==> kube-apiserver [5b6e6ff941a29a9b20094f92f688735b5220c212ec10548d525ae20ca595f942] <==
	I1025 09:07:22.999306       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:07:23.291422       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:07:23.874619       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1025 09:07:24.082413       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1025 09:07:24.083518       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:07:24.087455       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:07:24.614668       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:07:24.700308       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:07:24.747060       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:07:24.752204       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:07:26.756141       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:07:41.677849       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.12.166"}
	I1025 09:07:45.546041       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.236.2"}
	I1025 09:07:48.297261       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.211.194"}
	I1025 09:07:54.888260       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.138.84"}
	E1025 09:08:02.877373       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:52570: use of closed network connection
	E1025 09:08:10.027965       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58408: use of closed network connection
	E1025 09:08:10.807477       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58432: use of closed network connection
	E1025 09:08:13.035691       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58458: use of closed network connection
	I1025 09:08:13.361476       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.171.240"}
	E1025 09:08:13.653862       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58500: use of closed network connection
	I1025 09:08:14.208046       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:08:14.313515       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.90.115"}
	I1025 09:08:14.327436       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.46.230"}
	I1025 09:17:22.901498       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
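	Each "allocated clusterIPs" line records the apiserver handing a Service a free address from the service CIDR (10.96.0.0/12, per the allocator entry above). A minimal client-go sketch of the client side of that exchange, assuming in-cluster credentials: create a Service with no ClusterIP set and read back the one the apiserver assigned.
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/intstr"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		svc := &corev1.Service{
			ObjectMeta: metav1.ObjectMeta{Name: "hello-node", Namespace: "default"},
			Spec: corev1.ServiceSpec{
				Selector: map[string]string{"app": "hello-node"},
				Ports: []corev1.ServicePort{{
					Port:       8080,
					TargetPort: intstr.FromInt32(8080),
				}},
			},
		}
		created, err := cs.CoreV1().Services("default").
			Create(context.Background(), svc, metav1.CreateOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// The apiserver fills Spec.ClusterIP during Create.
		fmt.Println("allocated ClusterIP:", created.Spec.ClusterIP)
	}
	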
	
	==> kube-controller-manager [317faea540c2640a8be085a5cbc6cb694ca1b867c7aa33cfaff2d5fa3a772b60] <==
	I1025 09:07:26.329175       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:07:26.330249       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:07:26.351474       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:07:26.351493       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:07:26.351540       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:07:26.351655       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:07:26.351672       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:07:26.351683       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:07:26.351686       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:07:26.351842       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:07:26.352016       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:07:26.352193       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:07:26.353986       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:07:26.354605       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:07:26.355136       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:07:26.357293       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:07:26.359085       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:07:26.366333       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:07:26.370054       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:08:14.256443       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:08:14.260803       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:08:14.263048       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:08:14.265195       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:08:14.266646       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:08:14.271517       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
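	The burst of serviceaccount "kubernetes-dashboard" not found errors is a benign startup race: the dashboard manifests create the ReplicaSets and the ServiceAccount at almost the same instant, so the first few pod creations are rejected until the ServiceAccount exists, after which the controller's retry succeeds (the dashboard log further down shows it running). A minimal client-go sketch, assuming in-cluster credentials and the names from the log, of sequencing this explicitly by waiting for the ServiceAccount first:
	
	package main
	
	import (
		"context"
		"log"
		"time"
	
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		// Poll every second, for up to 30s, until the ServiceAccount exists.
		err = wait.PollUntilContextTimeout(context.Background(), time.Second, 30*time.Second, true,
			func(ctx context.Context) (bool, error) {
				_, err := cs.CoreV1().ServiceAccounts("kubernetes-dashboard").
					Get(ctx, "kubernetes-dashboard", metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return false, nil // not there yet; keep polling
				}
				return err == nil, err
			})
		if err != nil {
			log.Fatal(err)
		}
		log.Println("ServiceAccount ready; safe to create pods that reference it")
	}
	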
	
	==> kube-controller-manager [76cfa8354348168d0f543a544d83a3bb1ddbdb8a8ceb70daffa6504e2de7ff22] <==
	I1025 09:07:10.607622       1 serving.go:386] Generated self-signed cert in-memory
	I1025 09:07:10.878322       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1025 09:07:10.878354       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:07:10.879950       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1025 09:07:10.879958       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1025 09:07:10.880321       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1025 09:07:10.880412       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 09:07:20.882268       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [0a45fce02de3068780d42d095750de69ed287feb856b8b4802b29473fc4a8e91] <==
	I1025 09:07:01.492703       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:07:01.567549       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:07:01.668095       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:07:01.668151       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:07:01.668242       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:07:01.687109       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:07:01.687156       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:07:01.692558       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:07:01.692916       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:07:01.692948       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:07:01.694380       1 config.go:200] "Starting service config controller"
	I1025 09:07:01.694401       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:07:01.694457       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:07:01.694468       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:07:01.694504       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:07:01.694517       1 config.go:309] "Starting node config controller"
	I1025 09:07:01.694523       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:07:01.694527       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:07:01.694534       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:07:01.794472       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:07:01.794530       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:07:01.794564       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [88994630e8bed5eeb2deac7db5323f3dcd76d287ac673a865a5d9b685cb5fa13] <==
	I1025 09:06:12.843656       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:06:12.914955       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:06:13.015459       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:06:13.015498       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:06:13.015624       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:06:13.033190       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:06:13.033235       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:06:13.038149       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:06:13.038492       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:06:13.038527       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:06:13.039874       1 config.go:200] "Starting service config controller"
	I1025 09:06:13.039896       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:06:13.039901       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:06:13.039912       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:06:13.039872       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:06:13.039962       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:06:13.039985       1 config.go:309] "Starting node config controller"
	I1025 09:06:13.039991       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:06:13.039998       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:06:13.141076       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:06:13.141090       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:06:13.141143       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [23c4ee304f9861025f15fb91e37da5e6dcf05e76ee497e704317d7dd8b11dd84] <==
	E1025 09:06:04.824664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:06:04.824681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:06:04.824732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:06:04.824749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:06:04.824754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:06:04.824879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:06:04.824965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:06:04.824965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:06:04.824995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:06:04.825110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:06:04.825110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:06:04.825110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:06:05.633670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:06:05.770942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:06:05.791965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:06:05.830670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:06:05.883736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:06:05.928045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1025 09:06:06.417842       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:07:01.151705       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:07:01.151745       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1025 09:07:01.151882       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1025 09:07:01.151894       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1025 09:07:01.151812       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1025 09:07:01.151918       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b1f495361d78ff0fe96deb9acc2a57ac451af37aebf3ce37bb7df2fab71ec386] <==
	E1025 09:07:15.807312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 09:07:18.004019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:07:18.255708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:07:18.344956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:07:18.581977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:07:18.596514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:07:18.747973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:07:19.046992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:07:19.055575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:07:19.203512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:07:19.213965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:07:19.731918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:07:19.886474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:07:19.967510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:07:20.290702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 09:07:20.444227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:07:20.558442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:07:21.068154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:07:21.271580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:07:21.329158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 09:07:21.330455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:07:21.493998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 09:07:30.935955       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:07:31.936442       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 09:07:33.636422       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:15:30 functional-063906 kubelet[4306]: E1025 09:15:30.269652    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	Oct 25 09:15:37 functional-063906 kubelet[4306]: E1025 09:15:37.270051    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d45tb" podUID="99ea5400-18b3-48d3-a7af-c27d277d8511"
	Oct 25 09:15:43 functional-063906 kubelet[4306]: E1025 09:15:43.269162    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	Oct 25 09:15:49 functional-063906 kubelet[4306]: E1025 09:15:49.269540    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d45tb" podUID="99ea5400-18b3-48d3-a7af-c27d277d8511"
	Oct 25 09:15:58 functional-063906 kubelet[4306]: E1025 09:15:58.269760    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	Oct 25 09:16:03 functional-063906 kubelet[4306]: E1025 09:16:03.269476    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d45tb" podUID="99ea5400-18b3-48d3-a7af-c27d277d8511"
	Oct 25 09:16:12 functional-063906 kubelet[4306]: E1025 09:16:12.269982    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	Oct 25 09:16:18 functional-063906 kubelet[4306]: E1025 09:16:18.269512    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d45tb" podUID="99ea5400-18b3-48d3-a7af-c27d277d8511"
	Oct 25 09:16:25 functional-063906 kubelet[4306]: E1025 09:16:25.269419    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	Oct 25 09:16:30 functional-063906 kubelet[4306]: E1025 09:16:30.269175    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d45tb" podUID="99ea5400-18b3-48d3-a7af-c27d277d8511"
	Oct 25 09:16:38 functional-063906 kubelet[4306]: E1025 09:16:38.269549    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	Oct 25 09:16:42 functional-063906 kubelet[4306]: E1025 09:16:42.269898    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d45tb" podUID="99ea5400-18b3-48d3-a7af-c27d277d8511"
	Oct 25 09:16:49 functional-063906 kubelet[4306]: E1025 09:16:49.269683    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	Oct 25 09:16:57 functional-063906 kubelet[4306]: E1025 09:16:57.269629    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d45tb" podUID="99ea5400-18b3-48d3-a7af-c27d277d8511"
	Oct 25 09:17:02 functional-063906 kubelet[4306]: E1025 09:17:02.269715    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	Oct 25 09:17:10 functional-063906 kubelet[4306]: E1025 09:17:10.269183    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d45tb" podUID="99ea5400-18b3-48d3-a7af-c27d277d8511"
	Oct 25 09:17:16 functional-063906 kubelet[4306]: E1025 09:17:16.269665    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	Oct 25 09:17:25 functional-063906 kubelet[4306]: E1025 09:17:25.269388    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d45tb" podUID="99ea5400-18b3-48d3-a7af-c27d277d8511"
	Oct 25 09:17:30 functional-063906 kubelet[4306]: E1025 09:17:30.269835    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	Oct 25 09:17:40 functional-063906 kubelet[4306]: E1025 09:17:40.269494    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d45tb" podUID="99ea5400-18b3-48d3-a7af-c27d277d8511"
	Oct 25 09:17:45 functional-063906 kubelet[4306]: E1025 09:17:45.269473    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	Oct 25 09:17:54 functional-063906 kubelet[4306]: E1025 09:17:54.269425    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d45tb" podUID="99ea5400-18b3-48d3-a7af-c27d277d8511"
	Oct 25 09:17:56 functional-063906 kubelet[4306]: E1025 09:17:56.269146    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	Oct 25 09:18:06 functional-063906 kubelet[4306]: E1025 09:18:06.269391    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d45tb" podUID="99ea5400-18b3-48d3-a7af-c27d277d8511"
	Oct 25 09:18:10 functional-063906 kubelet[4306]: E1025 09:18:10.269424    4306 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hdrbz" podUID="e34b122f-285e-4d2f-8a5c-0d7b0abc50d2"
	
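	Every kubelet error in this block has one root cause: the test workloads reference the image by the short name kicbase/echo-server, and this node's CRI-O enforces containers-registries short-name resolution, so an unqualified name that could match more than one search registry is rejected as ambiguous instead of guessed. The usual fixes are to fully qualify the reference (docker.io/kicbase/echo-server:<tag>) or to relax short-name-mode in /etc/containers/registries.conf. A minimal client-go sketch of the first fix, with in-cluster credentials and the docker.io prefix and tag as assumptions:
	
	package main
	
	import (
		"context"
		"log"
	
		appsv1 "k8s.io/api/apps/v1"
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		labels := map[string]string{"app": "hello-node"}
		dep := &appsv1.Deployment{
			ObjectMeta: metav1.ObjectMeta{Name: "hello-node", Namespace: "default"},
			Spec: appsv1.DeploymentSpec{
				Selector: &metav1.LabelSelector{MatchLabels: labels},
				Template: corev1.PodTemplateSpec{
					ObjectMeta: metav1.ObjectMeta{Labels: labels},
					Spec: corev1.PodSpec{
						Containers: []corev1.Container{{
							Name: "echo-server",
							// Fully qualified registry/repository:tag, so CRI-O
							// never has to resolve a short name (tag assumed).
							Image: "docker.io/kicbase/echo-server:latest",
						}},
					},
				},
			},
		}
		if _, err := cs.AppsV1().Deployments("default").
			Create(context.Background(), dep, metav1.CreateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
	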
	
	==> kubernetes-dashboard [633b642e4bcc6277f41aa5aa37d20348a9ad36ec1034eaf637e15634047cb828] <==
	2025/10/25 09:08:20 Starting overwatch
	2025/10/25 09:08:20 Using namespace: kubernetes-dashboard
	2025/10/25 09:08:20 Using in-cluster config to connect to apiserver
	2025/10/25 09:08:20 Using secret token for csrf signing
	2025/10/25 09:08:20 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:08:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:08:20 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:08:20 Generating JWE encryption key
	2025/10/25 09:08:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:08:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:08:20 Initializing JWE encryption key from synchronized object
	2025/10/25 09:08:20 Creating in-cluster Sidecar client
	2025/10/25 09:08:20 Successful request to sidecar
	2025/10/25 09:08:20 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [4193a5cd0c812dcb0293bf78b6ce14f01d352df758ae6b9639a0b52b3e9226f1] <==
	W1025 09:06:35.746891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:37.749479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:37.752830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:39.755371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:39.759093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:41.761555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:41.766050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:43.769611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:43.773295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:45.776104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:45.780904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:47.784309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:47.788315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:49.792038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:49.797451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:51.800078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:51.804383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:53.807710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:53.811371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:55.814745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:55.818585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:57.821484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:57.825759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:59.828925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:06:59.832839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9d773c699eac8ead9c2e0216832ca2b677a36c8ff7cf12eefe4b4008e268a1b9] <==
	W1025 09:17:50.535195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:17:52.537998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:17:52.541459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:17:54.544507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:17:54.548776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:17:56.551774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:17:56.555457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:17:58.558511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:17:58.562292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:00.565572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:00.569206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:02.572109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:02.575849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:04.579441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:04.584098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:06.587159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:06.591039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:08.593839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:08.598679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:10.601737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:10.605385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:12.608571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:12.612283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:14.615428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:18:14.620656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
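	The two-second drumbeat of warnings in both storage-provisioner blocks comes from code still reading v1 Endpoints, most likely the provisioner's Endpoints-based leader-election lock; that API is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlices, exactly as the message says. A minimal client-go sketch of the replacement read path, assuming in-cluster credentials and using kube-dns as an example service:
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		// EndpointSlices are linked to their Service by a well-known label.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.Background(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"},
		)
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range slices.Items {
			for _, ep := range s.Endpoints {
				fmt.Println(s.Name, ep.Addresses)
			}
		}
	}
	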

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-063906 -n functional-063906
helpers_test.go:269: (dbg) Run:  kubectl --context functional-063906 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-d45tb hello-node-connect-7d85dfc575-hdrbz
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-063906 describe pod busybox-mount hello-node-75c85bcc94-d45tb hello-node-connect-7d85dfc575-hdrbz
helpers_test.go:290: (dbg) kubectl --context functional-063906 describe pod busybox-mount hello-node-75c85bcc94-d45tb hello-node-connect-7d85dfc575-hdrbz:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-063906/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:08:03 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://c5dc9ca680f8f9feea0b3001e8550676a39109355a2f12e803c204aeaf4b19ea
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 25 Oct 2025 09:08:05 +0000
	      Finished:     Sat, 25 Oct 2025 09:08:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8crmf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-8crmf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-063906
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.007s (2.007s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-d45tb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-063906/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:07:45 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8lz9b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8lz9b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-d45tb to functional-063906
	  Normal   Pulling    7m30s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m30s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m30s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    22s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     22s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-hdrbz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-063906/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:08:13 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w9lxl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w9lxl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hdrbz to functional-063906
	  Normal   Pulling    7m16s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m16s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m16s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m57s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m57s (x21 over 10m)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.90s)
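
The describe output above shows the actual root cause: CRI-O's short-name policy is set to "enforcing", so the unqualified reference kicbase/echo-server:latest resolves against multiple search registries and is rejected as ambiguous. One way around it, sketched below assuming the node uses the standard containers-registries.conf(5) layout (the drop-in file name is illustrative), is to alias the short name to a fully qualified one inside the node and restart CRI-O:

	# /etc/containers/registries.conf.d/99-echo-server.conf inside the node (sketch, hypothetical file name):
	#   [aliases]
	#   "kicbase/echo-server" = "docker.io/kicbase/echo-server"
	# then restart the runtime so it rereads the registries configuration:
	out/minikube-linux-amd64 -p functional-063906 ssh -- sudo systemctl restart crio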

TestFunctional/parallel/ServiceCmd/DeployApp (600.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-063906 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-063906 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-d45tb" [99ea5400-18b3-48d3-a7af-c27d277d8511] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-063906 -n functional-063906
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-25 09:17:45.882032494 +0000 UTC m=+1128.686204853
functional_test.go:1460: (dbg) Run:  kubectl --context functional-063906 describe po hello-node-75c85bcc94-d45tb -n default
functional_test.go:1460: (dbg) kubectl --context functional-063906 describe po hello-node-75c85bcc94-d45tb -n default:
Name:             hello-node-75c85bcc94-d45tb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-063906/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:07:45 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8lz9b (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-8lz9b:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-d45tb to functional-063906
  Normal   Pulling    6m59s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m59s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m59s (x5 over 10m)     kubelet            Error: ErrImagePull
  Normal   BackOff    4m53s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m53s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-063906 logs hello-node-75c85bcc94-d45tb -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-063906 logs hello-node-75c85bcc94-d45tb -n default: exit status 1 (69.73929ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-d45tb" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-063906 logs hello-node-75c85bcc94-d45tb -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.63s)
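
Since the pull fails on short-name resolution rather than on the registry itself, a registry-qualified image reference would sidestep the enforcing policy. A sketch of the same deployment with an unambiguous name:

	# Same deployment as above, but with a fully qualified image (sketch)
	kubectl --context functional-063906 create deployment hello-node --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-063906 expose deployment hello-node --type=NodePort --port=8080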

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image load --daemon kicbase/echo-server:functional-063906 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-063906" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image load --daemon kicbase/echo-server:functional-063906 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-063906" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-063906
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image load --daemon kicbase/echo-server:functional-063906 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-063906" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.84s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image save kicbase/echo-server:functional-063906 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1025 09:07:52.178657  168323 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:07:52.178786  168323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:07:52.178795  168323 out.go:374] Setting ErrFile to fd 2...
	I1025 09:07:52.178799  168323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:07:52.178992  168323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:07:52.179582  168323 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:07:52.179667  168323 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:07:52.180033  168323 cli_runner.go:164] Run: docker container inspect functional-063906 --format={{.State.Status}}
	I1025 09:07:52.198042  168323 ssh_runner.go:195] Run: systemctl --version
	I1025 09:07:52.198091  168323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-063906
	I1025 09:07:52.216254  168323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/functional-063906/id_rsa Username:docker}
	I1025 09:07:52.315070  168323 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1025 09:07:52.315134  168323 cache_images.go:254] Failed to load cached images for "functional-063906": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1025 09:07:52.315167  168323 cache_images.go:266] failed pushing to: functional-063906

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
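
This failure is downstream of ImageSaveToFile: the tarball was never written, so the stat in cache_images.go fails before anything is pushed to the node. For reference, the intended round trip looks like the sketch below (the /tmp path is an assumption):

	# Save an image from the cluster runtime to a tar, then load it back (sketch)
	out/minikube-linux-amd64 -p functional-063906 image save kicbase/echo-server:functional-063906 /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-063906 image load /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-063906 image ls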

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-063906
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image save --daemon kicbase/echo-server:functional-063906 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-063906
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-063906: exit status 1 (19.422583ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-063906

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-063906

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 service --namespace=default --https --url hello-node: exit status 115 (541.790235ms)

-- stdout --
	https://192.168.49.2:30783
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-063906 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 service hello-node --url --format={{.IP}}: exit status 115 (535.141478ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-063906 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 service hello-node --url: exit status 115 (537.015204ms)

-- stdout --
	http://192.168.49.2:30783
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-063906 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30783
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
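
All three ServiceCmd subtests fail the same way: the URL itself is computed, but minikube exits with SVC_UNREACHABLE because the hello-node service has no running backing pod (both deployments are stuck in ImagePullBackOff). A quick confirmation sketch:

	# Confirm the service has no ready endpoints (sketch)
	kubectl --context functional-063906 get pods -l app=hello-node
	kubectl --context functional-063906 get endpointslices -l kubernetes.io/service-name=hello-node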

TestJSONOutput/pause/Command (2.23s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-179550 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-179550 --output=json --user=testUser: exit status 80 (2.227021465s)

-- stdout --
	{"specversion":"1.0","id":"b41a31d7-0ac7-4beb-94d9-053b8cd5e387","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-179550 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"39ae86bf-7992-4125-9b4d-5a24e36e6d12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-25T09:27:33Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"cf871ecf-9436-47b6-821f-19ef6009fb93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-179550 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.23s)

TestJSONOutput/unpause/Command (1.76s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-179550 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-179550 --output=json --user=testUser: exit status 80 (1.763092841s)

-- stdout --
	{"specversion":"1.0","id":"2ce172cf-2e5e-4411-b11b-9b8959f6cf61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-179550 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"65f1703f-ff15-4d81-ae38-1da59e3baffb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-25T09:27:35Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"7b727fd7-1a01-49bc-8c01-1833483298da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-179550 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.76s)
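
Pause and unpause die at the same point: "sudo runc list -f json" cannot open /run/runc, runc's default state directory. A plausible first diagnostic, sketched under the assumption that the profile is still running, is to rerun the failing call inside the node and check what actually exists under /run (CRI-O can relocate the runc root via runtime_root in its configuration):

	# Reproduce the failing call inside the node (sketch)
	out/minikube-linux-amd64 -p json-output-179550 ssh -- sudo runc list -f json
	out/minikube-linux-amd64 -p json-output-179550 ssh -- ls /run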

TestPreload (437.96s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-220541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1025 09:36:52.510632  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-220541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (48.48167741s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-220541 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-220541 image pull gcr.io/k8s-minikube/busybox: (2.253579906s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-220541
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-220541: (5.846376428s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-220541 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1025 09:37:45.554015  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:08.621065  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:55.577208  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:52.509915  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:42:45.552529  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-220541 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (6m17.651117006s)

-- stdout --
	* [test-preload-220541] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	* Using the docker driver based on existing profile
	* Starting "test-preload-220541" primary control-plane node in "test-preload-220541" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Downloading Kubernetes v1.32.0 preload ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1025 09:37:36.246984  294802 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:37:36.247236  294802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:37:36.247245  294802 out.go:374] Setting ErrFile to fd 2...
	I1025 09:37:36.247250  294802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:37:36.247458  294802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:37:36.247885  294802 out.go:368] Setting JSON to false
	I1025 09:37:36.248798  294802 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4800,"bootTime":1761380256,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:37:36.248850  294802 start.go:141] virtualization: kvm guest
	I1025 09:37:36.250686  294802 out.go:179] * [test-preload-220541] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:37:36.251854  294802 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:37:36.251865  294802 notify.go:220] Checking for updates...
	I1025 09:37:36.254818  294802 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:37:36.255804  294802 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:37:36.256870  294802 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:37:36.257913  294802 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:37:36.258942  294802 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:37:36.260259  294802 config.go:182] Loaded profile config "test-preload-220541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1025 09:37:36.261650  294802 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 09:37:36.262598  294802 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:37:36.286418  294802 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:37:36.286525  294802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:37:36.343064  294802 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-25 09:37:36.333052429 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:37:36.343176  294802 docker.go:318] overlay module found
	I1025 09:37:36.344754  294802 out.go:179] * Using the docker driver based on existing profile
	I1025 09:37:36.345820  294802 start.go:305] selected driver: docker
	I1025 09:37:36.345831  294802 start.go:925] validating driver "docker" against &{Name:test-preload-220541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-220541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:36.345913  294802 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:37:36.346485  294802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:37:36.402169  294802 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-25 09:37:36.392733407 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:37:36.402592  294802 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:37:36.402622  294802 cni.go:84] Creating CNI manager for ""
	I1025 09:37:36.402675  294802 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:37:36.402712  294802 start.go:349] cluster config:
	{Name:test-preload-220541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-220541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:36.404405  294802 out.go:179] * Starting "test-preload-220541" primary control-plane node in "test-preload-220541" cluster
	I1025 09:37:36.405534  294802 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:37:36.406661  294802 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:37:36.407772  294802 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1025 09:37:36.407811  294802 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:37:36.427956  294802 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:37:36.427975  294802 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:37:36.752906  294802 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1025 09:37:36.752949  294802 cache.go:58] Caching tarball of preloaded images
	I1025 09:37:36.753122  294802 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1025 09:37:36.754854  294802 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1025 09:37:36.756133  294802 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1025 09:37:36.860713  294802 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1025 09:37:36.860758  294802 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1025 09:37:47.122566  294802 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1025 09:37:47.122760  294802 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/config.json ...
	I1025 09:37:47.123006  294802 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:37:47.123042  294802 start.go:360] acquireMachinesLock for test-preload-220541: {Name:mkbfcb547a313f19c64c690f47ad8dc7ad9c5624 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:37:47.123148  294802 start.go:364] duration metric: took 74.803µs to acquireMachinesLock for "test-preload-220541"
	I1025 09:37:47.123176  294802 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:37:47.123188  294802 fix.go:54] fixHost starting: 
	I1025 09:37:47.123469  294802 cli_runner.go:164] Run: docker container inspect test-preload-220541 --format={{.State.Status}}
	I1025 09:37:47.140961  294802 fix.go:112] recreateIfNeeded on test-preload-220541: state=Stopped err=<nil>
	W1025 09:37:47.140992  294802 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:37:47.142660  294802 out.go:252] * Restarting existing docker container for "test-preload-220541" ...
	I1025 09:37:47.142737  294802 cli_runner.go:164] Run: docker start test-preload-220541
	I1025 09:37:47.368966  294802 cli_runner.go:164] Run: docker container inspect test-preload-220541 --format={{.State.Status}}
	I1025 09:37:47.387025  294802 kic.go:430] container "test-preload-220541" state is running.
	I1025 09:37:47.387433  294802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-220541
	I1025 09:37:47.404916  294802 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/config.json ...
	I1025 09:37:47.405123  294802 machine.go:93] provisionDockerMachine start ...
	I1025 09:37:47.405193  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:47.423737  294802 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:47.424008  294802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:37:47.424022  294802 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:37:47.424738  294802 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52452->127.0.0.1:33078: read: connection reset by peer
	I1025 09:37:50.566441  294802 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-220541
	
	I1025 09:37:50.566474  294802 ubuntu.go:182] provisioning hostname "test-preload-220541"
	I1025 09:37:50.566535  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:50.583943  294802 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:50.584153  294802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:37:50.584166  294802 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-220541 && echo "test-preload-220541" | sudo tee /etc/hostname
	I1025 09:37:50.734081  294802 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-220541
	
	I1025 09:37:50.734158  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:50.752940  294802 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:50.753181  294802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:37:50.753207  294802 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-220541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-220541/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-220541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:37:50.893088  294802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:37:50.893126  294802 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:37:50.893176  294802 ubuntu.go:190] setting up certificates
	I1025 09:37:50.893186  294802 provision.go:84] configureAuth start
	I1025 09:37:50.893240  294802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-220541
	I1025 09:37:50.910777  294802 provision.go:143] copyHostCerts
	I1025 09:37:50.910853  294802 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:37:50.910877  294802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:37:50.910959  294802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:37:50.911106  294802 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:37:50.911119  294802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:37:50.911164  294802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:37:50.911262  294802 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:37:50.911273  294802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:37:50.911310  294802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:37:50.911417  294802 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.test-preload-220541 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-220541]
	I1025 09:37:51.005753  294802 provision.go:177] copyRemoteCerts
	I1025 09:37:51.005825  294802 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:37:51.005864  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:51.024019  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:51.123542  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:37:51.141015  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 09:37:51.158033  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:37:51.174958  294802 provision.go:87] duration metric: took 281.746294ms to configureAuth
	I1025 09:37:51.174987  294802 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:37:51.175186  294802 config.go:182] Loaded profile config "test-preload-220541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1025 09:37:51.175326  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:51.192152  294802 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:51.192440  294802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:37:51.192466  294802 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:37:51.463562  294802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:37:51.463595  294802 machine.go:96] duration metric: took 4.058458222s to provisionDockerMachine
	I1025 09:37:51.463612  294802 start.go:293] postStartSetup for "test-preload-220541" (driver="docker")
	I1025 09:37:51.463626  294802 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:37:51.463705  294802 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:37:51.463769  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:51.481432  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:51.580717  294802 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:37:51.584212  294802 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:37:51.584239  294802 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:37:51.584248  294802 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:37:51.584308  294802 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:37:51.584429  294802 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:37:51.584548  294802 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:37:51.592003  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:37:51.609394  294802 start.go:296] duration metric: took 145.765255ms for postStartSetup
	I1025 09:37:51.609487  294802 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:37:51.609536  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:51.627259  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:51.723519  294802 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:37:51.728058  294802 fix.go:56] duration metric: took 4.604861843s for fixHost
	I1025 09:37:51.728083  294802 start.go:83] releasing machines lock for "test-preload-220541", held for 4.604917827s
	I1025 09:37:51.728163  294802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-220541
	I1025 09:37:51.745580  294802 ssh_runner.go:195] Run: cat /version.json
	I1025 09:37:51.745617  294802 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:37:51.745627  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:51.745693  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:51.764037  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:51.764383  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:51.911767  294802 ssh_runner.go:195] Run: systemctl --version
	I1025 09:37:51.918202  294802 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:37:51.952221  294802 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:37:51.956955  294802 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:37:51.957024  294802 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:37:51.965310  294802 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:37:51.965341  294802 start.go:495] detecting cgroup driver to use...
	I1025 09:37:51.965390  294802 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:37:51.965442  294802 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:37:51.979148  294802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:37:51.991138  294802 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:37:51.991206  294802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:37:52.005171  294802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:37:52.017390  294802 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:37:52.097007  294802 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:37:52.176743  294802 docker.go:234] disabling docker service ...
	I1025 09:37:52.176803  294802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:37:52.190792  294802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:37:52.202900  294802 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:37:52.284179  294802 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:37:52.363042  294802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:37:52.375930  294802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:37:52.389686  294802 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1025 09:37:52.389738  294802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:52.398482  294802 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:37:52.398542  294802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:52.407304  294802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:52.416159  294802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:52.424588  294802 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:37:52.432477  294802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:52.441087  294802 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:52.449245  294802 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
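Note: taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. An illustrative reconstruction of the resulting drop-in follows; the section headers are assumptions based on a stock CRI-O config, and the real file may carry additional settings:

	[crio.image]
	# set by the pause_image sed above
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	# set by the cgroup_manager / conmon_cgroup seds above
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	# ensured by the default_sysctls grep/sed pair above
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]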
	I1025 09:37:52.457702  294802 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:37:52.464864  294802 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:37:52.471924  294802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:52.548424  294802 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:37:52.654917  294802 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:37:52.654974  294802 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:37:52.658931  294802 start.go:563] Will wait 60s for crictl version
	I1025 09:37:52.658998  294802 ssh_runner.go:195] Run: which crictl
	I1025 09:37:52.662324  294802 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:37:52.686626  294802 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:37:52.686701  294802 ssh_runner.go:195] Run: crio --version
	I1025 09:37:52.713118  294802 ssh_runner.go:195] Run: crio --version
	I1025 09:37:52.741734  294802 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	I1025 09:37:52.743115  294802 cli_runner.go:164] Run: docker network inspect test-preload-220541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:37:52.760167  294802 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:37:52.764127  294802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:37:52.774274  294802 kubeadm.go:883] updating cluster {Name:test-preload-220541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-220541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:37:52.774429  294802 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1025 09:37:52.774495  294802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:37:52.804631  294802 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:37:52.804653  294802 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:37:52.804700  294802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:37:52.829281  294802 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:37:52.829307  294802 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:37:52.829316  294802 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1025 09:37:52.829448  294802 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-220541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-220541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:37:52.829533  294802 ssh_runner.go:195] Run: crio config
	I1025 09:37:52.875121  294802 cni.go:84] Creating CNI manager for ""
	I1025 09:37:52.875142  294802 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:37:52.875160  294802 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:37:52.875182  294802 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-220541 NodeName:test-preload-220541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:37:52.875304  294802 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-220541"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
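Note: the kubeadm config printed above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), separated by --- markers; the scp step below ships it to /var/tmp/minikube/kubeadm.yaml.new (2215 bytes). A quick sanity check on such a file (path assumed):

	grep -c -- '^---$' /var/tmp/minikube/kubeadm.yaml.new
	# 3 separators, i.e. 4 documents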
	
	I1025 09:37:52.875385  294802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1025 09:37:52.883544  294802 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:37:52.883646  294802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:37:52.891313  294802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1025 09:37:52.904034  294802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:37:52.916557  294802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1025 09:37:52.928841  294802 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:37:52.932557  294802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:37:52.942273  294802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:53.020284  294802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:37:53.043691  294802 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541 for IP: 192.168.76.2
	I1025 09:37:53.043712  294802 certs.go:195] generating shared ca certs ...
	I1025 09:37:53.043739  294802 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:53.043930  294802 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:37:53.044000  294802 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:37:53.044020  294802 certs.go:257] generating profile certs ...
	I1025 09:37:53.044120  294802 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/client.key
	I1025 09:37:53.044194  294802 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/apiserver.key.87d9b87c
	I1025 09:37:53.044249  294802 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/proxy-client.key
	I1025 09:37:53.044412  294802 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:37:53.044451  294802 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:37:53.044463  294802 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:37:53.044490  294802 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:37:53.044519  294802 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:37:53.044548  294802 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:37:53.044602  294802 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:37:53.045398  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:37:53.064105  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:37:53.083041  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:37:53.102395  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:37:53.126553  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 09:37:53.144341  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:37:53.161323  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:37:53.178078  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:37:53.194573  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:37:53.211542  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:37:53.228290  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:37:53.246330  294802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:37:53.258308  294802 ssh_runner.go:195] Run: openssl version
	I1025 09:37:53.264182  294802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:37:53.272475  294802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:53.276048  294802 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:53.276097  294802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:53.310040  294802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:37:53.318399  294802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:37:53.326651  294802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:37:53.330462  294802 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:37:53.330520  294802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:37:53.364139  294802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:37:53.372476  294802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:37:53.380797  294802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:37:53.384664  294802 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:37:53.384729  294802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:37:53.419665  294802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
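Note: the hex names in the symlink commands above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes; OpenSSL resolves CAs in /etc/ssl/certs by looking up <hash>.0. Each hash is what the corresponding "openssl x509 -hash -noout" run above prints, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink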
	I1025 09:37:53.428321  294802 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:37:53.432300  294802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:37:53.466429  294802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:37:53.500094  294802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:37:53.533944  294802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:37:53.577251  294802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:37:53.619160  294802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 09:37:53.660478  294802 kubeadm.go:400] StartCluster: {Name:test-preload-220541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-220541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:53.660578  294802 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:37:53.660631  294802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:37:53.687677  294802 cri.go:89] found id: ""
	I1025 09:37:53.687746  294802 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:37:53.695837  294802 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:37:53.695857  294802 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:37:53.695926  294802 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:37:53.703032  294802 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:37:53.703452  294802 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-220541" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:37:53.703571  294802 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-130604/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-220541" cluster setting kubeconfig missing "test-preload-220541" context setting]
	I1025 09:37:53.703861  294802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:53.704316  294802 kapi.go:59] client config for test-preload-220541: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/client.key", CAFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
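Note: the rest.Config dump above comes from minikube's kapi client setup. A minimal, self-contained sketch of producing such a config with client-go follows (not minikube's actual kapi.go; the kubeconfig path is a placeholder):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a *rest.Config from a kubeconfig file; the struct printed in
		// the log above is the same type, populated with the cluster's host
		// plus client cert/key and CA file paths.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		fmt.Println(cfg.Host) // e.g. https://192.168.76.2:8443
	}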
	I1025 09:37:53.704714  294802 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 09:37:53.704727  294802 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 09:37:53.704732  294802 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 09:37:53.704735  294802 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 09:37:53.704742  294802 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 09:37:53.705067  294802 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:37:53.712285  294802 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 09:37:53.712316  294802 kubeadm.go:601] duration metric: took 16.45319ms to restartPrimaryControlPlane
	I1025 09:37:53.712326  294802 kubeadm.go:402] duration metric: took 51.858815ms to StartCluster
	I1025 09:37:53.712354  294802 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:53.712421  294802 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:37:53.712996  294802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:53.713192  294802 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:37:53.713245  294802 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:37:53.713355  294802 addons.go:69] Setting storage-provisioner=true in profile "test-preload-220541"
	I1025 09:37:53.713369  294802 config.go:182] Loaded profile config "test-preload-220541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1025 09:37:53.713374  294802 addons.go:238] Setting addon storage-provisioner=true in "test-preload-220541"
	W1025 09:37:53.713417  294802 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:37:53.713444  294802 host.go:66] Checking if "test-preload-220541" exists ...
	I1025 09:37:53.713374  294802 addons.go:69] Setting default-storageclass=true in profile "test-preload-220541"
	I1025 09:37:53.713498  294802 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-220541"
	I1025 09:37:53.713750  294802 cli_runner.go:164] Run: docker container inspect test-preload-220541 --format={{.State.Status}}
	I1025 09:37:53.713818  294802 cli_runner.go:164] Run: docker container inspect test-preload-220541 --format={{.State.Status}}
	I1025 09:37:53.716526  294802 out.go:179] * Verifying Kubernetes components...
	I1025 09:37:53.717665  294802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:53.733508  294802 kapi.go:59] client config for test-preload-220541: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/client.key", CAFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:37:53.733750  294802 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:37:53.733907  294802 addons.go:238] Setting addon default-storageclass=true in "test-preload-220541"
	W1025 09:37:53.733927  294802 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:37:53.733959  294802 host.go:66] Checking if "test-preload-220541" exists ...
	I1025 09:37:53.734443  294802 cli_runner.go:164] Run: docker container inspect test-preload-220541 --format={{.State.Status}}
	I1025 09:37:53.735399  294802 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:37:53.735424  294802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:37:53.735486  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:53.760642  294802 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:37:53.760683  294802 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:37:53.760754  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:53.761518  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:53.782688  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
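Note: the storageclass.yaml copied above is minikube's default-storageclass addon manifest (271 bytes). A plausible minimal StorageClass of this kind, for illustration only (field values beyond the k8s.io/minikube-hostpath provisioner are assumptions):

	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: standard
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"
	provisioner: k8s.io/minikube-hostpath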
	I1025 09:37:53.817906  294802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:37:53.830735  294802 node_ready.go:35] waiting up to 6m0s for node "test-preload-220541" to be "Ready" ...
	I1025 09:37:53.869673  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:37:53.888423  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:53.926890  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:53.926938  294802 retry.go:31] will retry after 229.977996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
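Note: the "will retry after ..." lines here and below show a jittered, growing backoff while the apiserver comes back up. A minimal sketch of that pattern follows (hypothetical helper, not minikube's actual retry package):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo retries fn with a roughly doubling, jittered delay until it
	// succeeds or maxElapsed is exceeded.
	func retryExpo(fn func() error, base, maxElapsed time.Duration) error {
		start, delay := time.Now(), base
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("giving up: %w", err)
			}
			// jitter keeps concurrent retries from synchronizing
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		_ = retryExpo(func() error {
			if attempts++; attempts < 3 {
				return fmt.Errorf("connection refused")
			}
			return nil
		}, 200*time.Millisecond, 10*time.Second)
	}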
	W1025 09:37:53.942698  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:53.942730  294802 retry.go:31] will retry after 317.889005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.158112  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:37:54.212204  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.212242  294802 retry.go:31] will retry after 262.272462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.261467  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:54.316490  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.316524  294802 retry.go:31] will retry after 273.444312ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.475366  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:37:54.528886  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.528914  294802 retry.go:31] will retry after 340.586386ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.591148  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:54.647387  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.647418  294802 retry.go:31] will retry after 349.59827ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.870466  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:37:54.925561  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.925596  294802 retry.go:31] will retry after 426.813175ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.997898  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:55.052701  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:55.052735  294802 retry.go:31] will retry after 482.685044ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:55.352591  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:37:55.407065  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:55.407099  294802 retry.go:31] will retry after 1.31010726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:55.536373  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:55.590488  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:55.590522  294802 retry.go:31] will retry after 1.057724306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:37:55.832339  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:37:56.648606  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:56.701016  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:56.701056  294802 retry.go:31] will retry after 1.452130445s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:56.718235  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:37:56.772318  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:56.772361  294802 retry.go:31] will retry after 1.579292426s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:58.153670  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:58.207570  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:58.207611  294802 retry.go:31] will retry after 2.561855368s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:37:58.332381  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:37:58.352565  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:37:58.406463  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:58.406499  294802 retry.go:31] will retry after 3.320673589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:00.769892  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:38:00.826233  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:00.826266  294802 retry.go:31] will retry after 4.907315926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:00.831722  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:01.727475  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:38:01.783957  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:01.783988  294802 retry.go:31] will retry after 5.200894374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:03.331581  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:05.734014  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:38:05.788789  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:05.788827  294802 retry.go:31] will retry after 8.842449513s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:05.831306  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:06.985654  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:38:07.039570  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:07.039602  294802 retry.go:31] will retry after 5.880928081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:08.332028  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:10.332338  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:12.831501  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:12.920663  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:38:12.975694  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:12.975728  294802 retry.go:31] will retry after 9.826083703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:14.632446  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:38:14.686825  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:14.686867  294802 retry.go:31] will retry after 9.505095425s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:14.831629  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:17.331335  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:19.831401  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:21.831543  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:22.802041  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:38:22.856802  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:22.856841  294802 retry.go:31] will retry after 15.213528133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:23.832129  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:24.192576  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:38:24.246238  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:24.246274  294802 retry.go:31] will retry after 14.095691585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:26.331911  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:28.332033  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:30.332271  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:32.831436  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:35.332315  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:37.831386  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:38.070621  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:38:38.124037  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:38.124068  294802 retry.go:31] will retry after 13.377991476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:38.343149  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:38:38.398036  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:38.398070  294802 retry.go:31] will retry after 19.80899147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:39.831973  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:42.332276  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:44.831311  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:46.831556  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:49.331426  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:51.331667  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:51.502946  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:38:51.557113  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:51.557148  294802 retry.go:31] will retry after 33.041558736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:53.332193  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:55.832098  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:58.207832  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:38:58.261550  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:58.261581  294802 retry.go:31] will retry after 25.385163195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:58.332318  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	[... the same node_ready.go:55 "connection refused" warning repeats roughly every 2.5s from 09:39:00 through 09:39:19 (10 near-identical lines elided) ...]
	W1025 09:39:21.831589  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:39:23.647805  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:39:23.703175  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:39:23.703301  294802 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1025 09:39:23.832120  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:39:24.599620  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:39:24.653306  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:39:24.653433  294802 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1025 09:39:24.655098  294802 out.go:179] * Enabled addons: 
	I1025 09:39:24.656118  294802 addons.go:514] duration metric: took 1m30.942873555s for enable addons: enabled=[]
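The retry delays above grow roughly geometrically with jitter (482ms, 1.31s, 1.45s, 2.56s, 4.91s, 8.84s, 15.2s, 19.8s, 33.0s, 25.4s, ...) until the ~91s addon window closes with enabled=[]. A minimal sketch of that capped, jittered exponential backoff, assuming nothing about minikube's actual retry.go beyond the delay pattern visible in the log:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryApply retries apply with capped, jittered exponential backoff
	// until it succeeds or the time budget is exhausted.
	func retryApply(apply func() error, budget time.Duration) error {
		deadline := time.Now().Add(budget)
		delay := 500 * time.Millisecond
		for time.Now().Before(deadline) {
			if err := apply(); err == nil {
				return nil
			}
			// Jitter spreads concurrent retries apart; the cap keeps waits bounded.
			sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s\n", sleep)
			time.Sleep(sleep)
			if delay < 20*time.Second {
				delay *= 2
			}
		}
		return errors.New("retry budget exhausted")
	}

	func main() {
		err := retryApply(func() error {
			return errors.New("connect: connection refused") // stand-in for the failing kubectl apply
		}, 90*time.Second)
		fmt.Println(err)
	}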
	W1025 09:39:26.331456  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:28.332157  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:30.832002  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:33.331688  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:35.332220  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:37.832018  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:40.331737  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:42.831578  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:44.832126  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:47.331905  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:49.831632  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:51.832264  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:54.331430  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:56.331812  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:58.831450  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:00.832060  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:03.331462  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:05.831446  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:07.832230  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:10.331774  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:12.831373  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:14.832059  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:17.332031  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:19.831734  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:21.832378  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:24.332221  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:26.832067  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:29.331836  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:31.831727  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:34.331584  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:36.332144  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:38.831657  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:40.832278  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:43.331865  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:45.831608  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:48.331438  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:50.331876  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:52.831450  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:54.832299  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:57.332093  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:59.831481  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:01.832016  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:04.331582  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:06.332056  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:08.831522  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:10.831951  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:13.331461  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:15.332092  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:17.831612  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:19.831829  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:22.331270  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:24.331839  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:26.831798  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:29.331322  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:31.331622  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:33.332230  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:35.831756  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:37.832286  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:40.331467  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:42.332183  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:44.832060  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:47.331743  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:49.332085  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:51.831789  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:53.832427  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:56.332228  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:58.832186  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:01.331677  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	[... 48 further node_ready retry warnings, identical apart from the timestamp, logged at roughly 2.5-second intervals from 09:42:03 through 09:43:50, elided ...]
	W1025 09:43:53.331420  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:43:53.831043  294802 node_ready.go:38] duration metric: took 6m0.000241159s for node "test-preload-220541" to be "Ready" ...
	I1025 09:43:53.832972  294802 out.go:203] 
	W1025 09:43:53.834339  294802 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1025 09:43:53.834369  294802 out.go:285] * 
	W1025 09:43:53.835980  294802 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:43:53.837262  294802 out.go:203] 

** /stderr **
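
The warnings captured above come from a fixed-interval poll of the node's Ready condition that runs until a hard 6-minute deadline; once the deadline passes, the start aborts with the "GUEST_START ... WaitNodeCondition: context deadline exceeded" error shown in the stderr. The sketch below is a minimal client-go illustration of that pattern, not minikube's actual node_ready.go; the 2.5-second interval is inferred from the log timestamps, and the profile name is taken from this run.

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition at a fixed interval until
// the timeout elapses, tolerating transient apiserver errors such as the
// "connection refused" failures above instead of failing hard.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				return false, nil // retry on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// On timeout this returns context.DeadlineExceeded, which is what
	// surfaces above as "WaitNodeCondition: context deadline exceeded".
	if err := waitNodeReady(context.Background(), cs, "test-preload-220541"); err != nil {
		log.Fatal(err)
	}
}
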
preload_test.go:67: out/minikube-linux-amd64 start -p test-preload-220541 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio failed: exit status 80
panic.go:636: *** TestPreload FAILED at 2025-10-25 09:43:53.874332492 +0000 UTC m=+2696.678504898
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect test-preload-220541
helpers_test.go:243: (dbg) docker inspect test-preload-220541:

-- stdout --
	[
	    {
	        "Id": "6ed32a0b7f1e590e8f48d870fcd582ea227457a595c1bc7f44af2f15a5c08b9d",
	        "Created": "2025-10-25T09:36:40.470662439Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295041,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:37:47.166121361Z",
	            "FinishedAt": "2025-10-25T09:37:35.820126351Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/6ed32a0b7f1e590e8f48d870fcd582ea227457a595c1bc7f44af2f15a5c08b9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ed32a0b7f1e590e8f48d870fcd582ea227457a595c1bc7f44af2f15a5c08b9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ed32a0b7f1e590e8f48d870fcd582ea227457a595c1bc7f44af2f15a5c08b9d/hosts",
	        "LogPath": "/var/lib/docker/containers/6ed32a0b7f1e590e8f48d870fcd582ea227457a595c1bc7f44af2f15a5c08b9d/6ed32a0b7f1e590e8f48d870fcd582ea227457a595c1bc7f44af2f15a5c08b9d-json.log",
	        "Name": "/test-preload-220541",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-220541:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "test-preload-220541",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6ed32a0b7f1e590e8f48d870fcd582ea227457a595c1bc7f44af2f15a5c08b9d",
	                "LowerDir": "/var/lib/docker/overlay2/477afaabae6d8bce67815331d2d9e6ea525d18c2adb7a52fcf76589e5ccb448f-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/477afaabae6d8bce67815331d2d9e6ea525d18c2adb7a52fcf76589e5ccb448f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/477afaabae6d8bce67815331d2d9e6ea525d18c2adb7a52fcf76589e5ccb448f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/477afaabae6d8bce67815331d2d9e6ea525d18c2adb7a52fcf76589e5ccb448f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-220541",
	                "Source": "/var/lib/docker/volumes/test-preload-220541/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-220541",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-220541",
	                "name.minikube.sigs.k8s.io": "test-preload-220541",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "856fbe2788fb211d4d5abe811c4680c92a81b5b0ae473b077e82baf189a2bdd7",
	            "SandboxKey": "/var/run/docker/netns/856fbe2788fb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-220541": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:2f:bd:26:60:2c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "490049e8af82eaf8734831a8a440d7ab1cbbad8642e5df4252f4190a29e4a05a",
	                    "EndpointID": "c84eddd18d6fa3e7b80df8213f97bba132d272192686f07f36fc5a335c7acb50",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "test-preload-220541",
	                        "6ed32a0b7f1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
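
The inspect output above shows the kic container's port layout: each guest port (22, 2376, 5000, 8443, 32443) is published on an ephemeral 127.0.0.1 port, so the apiserver bound to 192.168.76.2:8443 on the cluster network is reached from the host via 127.0.0.1:33081. The cli_runner entries later in this log read those mappings with a Go template passed to `docker container inspect -f`; the stand-alone sketch below issues the same query (container name taken from this run).

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// apiServerHostPort asks Docker which host port the container's 8443/tcp
// (the apiserver port) was published on, using the same Go-template form
// as the cli_runner lines in the log below.
func apiServerHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := apiServerHostPort("test-preload-220541")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(port) // prints 33081 for the run above
}
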
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-220541 -n test-preload-220541
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-220541 -n test-preload-220541: exit status 2 (306.126242ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
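
`minikube status` encodes component health in its exit code (its help text describes a bitmask, one bit per unhealthy component), so a non-zero exit can still carry usable stdout; that is why the harness above notes "(may be ok)" when exit status 2 arrives alongside a Host state of "Running". The sketch below mirrors that tolerant handling in Go; the binary path and profile are the ones from this run, and the bitmask reading comes from minikube's documentation rather than this log.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hostStatus runs `minikube status --format={{.Host}}` and keeps the stdout
// even when the exit code is non-zero, mirroring the "(may be ok)" handling
// in helpers_test above.
func hostStatus(profile string) (status string, exitCode int, err error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // Output still returns captured stdout on ExitError
	status = strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A non-zero code with parseable stdout is degraded, not fatal.
		return status, ee.ExitCode(), nil
	}
	return status, 0, err
}

func main() {
	s, code, err := hostStatus("test-preload-220541")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(s, code) // for the run above: "Running 2"
}
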
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-220541 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ multinode-815809 cp multinode-815809-m03:/home/docker/cp-test.txt multinode-815809:/home/docker/cp-test_multinode-815809-m03_multinode-815809.txt         │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:33 UTC │
	│ ssh     │ multinode-815809 ssh -n multinode-815809-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:33 UTC │
	│ ssh     │ multinode-815809 ssh -n multinode-815809 sudo cat /home/docker/cp-test_multinode-815809-m03_multinode-815809.txt                                          │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:33 UTC │
	│ cp      │ multinode-815809 cp multinode-815809-m03:/home/docker/cp-test.txt multinode-815809-m02:/home/docker/cp-test_multinode-815809-m03_multinode-815809-m02.txt │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:33 UTC │
	│ ssh     │ multinode-815809 ssh -n multinode-815809-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:33 UTC │
	│ ssh     │ multinode-815809 ssh -n multinode-815809-m02 sudo cat /home/docker/cp-test_multinode-815809-m03_multinode-815809-m02.txt                                  │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:33 UTC │
	│ node    │ multinode-815809 node stop m03                                                                                                                            │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:33 UTC │
	│ node    │ multinode-815809 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:33 UTC │
	│ node    │ list -p multinode-815809                                                                                                                                  │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │                     │
	│ stop    │ -p multinode-815809                                                                                                                                       │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p multinode-815809 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:35 UTC │
	│ node    │ list -p multinode-815809                                                                                                                                  │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ node    │ multinode-815809 node delete m03                                                                                                                          │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ stop    │ multinode-815809 stop                                                                                                                                     │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p multinode-815809 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
	│ node    │ list -p multinode-815809                                                                                                                                  │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ start   │ -p multinode-815809-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-815809-m02 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ start   │ -p multinode-815809-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-815809-m03 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ node    │ add -p multinode-815809                                                                                                                                   │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ delete  │ -p multinode-815809-m03                                                                                                                                   │ multinode-815809-m03 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ delete  │ -p multinode-815809                                                                                                                                       │ multinode-815809     │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ start   │ -p test-preload-220541 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-220541  │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:37 UTC │
	│ image   │ test-preload-220541 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-220541  │ jenkins │ v1.37.0 │ 25 Oct 25 09:37 UTC │ 25 Oct 25 09:37 UTC │
	│ stop    │ -p test-preload-220541                                                                                                                                    │ test-preload-220541  │ jenkins │ v1.37.0 │ 25 Oct 25 09:37 UTC │ 25 Oct 25 09:37 UTC │
	│ start   │ -p test-preload-220541 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-220541  │ jenkins │ v1.37.0 │ 25 Oct 25 09:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:37:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:37:36.246984  294802 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:37:36.247236  294802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:37:36.247245  294802 out.go:374] Setting ErrFile to fd 2...
	I1025 09:37:36.247250  294802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:37:36.247458  294802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:37:36.247885  294802 out.go:368] Setting JSON to false
	I1025 09:37:36.248798  294802 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4800,"bootTime":1761380256,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:37:36.248850  294802 start.go:141] virtualization: kvm guest
	I1025 09:37:36.250686  294802 out.go:179] * [test-preload-220541] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:37:36.251854  294802 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:37:36.251865  294802 notify.go:220] Checking for updates...
	I1025 09:37:36.254818  294802 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:37:36.255804  294802 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:37:36.256870  294802 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:37:36.257913  294802 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:37:36.258942  294802 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:37:36.260259  294802 config.go:182] Loaded profile config "test-preload-220541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1025 09:37:36.261650  294802 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 09:37:36.262598  294802 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:37:36.286418  294802 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:37:36.286525  294802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:37:36.343064  294802 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-25 09:37:36.333052429 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:37:36.343176  294802 docker.go:318] overlay module found
	I1025 09:37:36.344754  294802 out.go:179] * Using the docker driver based on existing profile
	I1025 09:37:36.345820  294802 start.go:305] selected driver: docker
	I1025 09:37:36.345831  294802 start.go:925] validating driver "docker" against &{Name:test-preload-220541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-220541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:36.345913  294802 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:37:36.346485  294802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:37:36.402169  294802 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-25 09:37:36.392733407 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:37:36.402592  294802 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:37:36.402622  294802 cni.go:84] Creating CNI manager for ""
	I1025 09:37:36.402675  294802 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:37:36.402712  294802 start.go:349] cluster config:
	{Name:test-preload-220541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-220541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:36.404405  294802 out.go:179] * Starting "test-preload-220541" primary control-plane node in "test-preload-220541" cluster
	I1025 09:37:36.405534  294802 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:37:36.406661  294802 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:37:36.407772  294802 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1025 09:37:36.407811  294802 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:37:36.427956  294802 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:37:36.427975  294802 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:37:36.752906  294802 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1025 09:37:36.752949  294802 cache.go:58] Caching tarball of preloaded images
	I1025 09:37:36.753122  294802 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1025 09:37:36.754854  294802 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1025 09:37:36.756133  294802 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1025 09:37:36.860713  294802 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1025 09:37:36.860758  294802 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1025 09:37:47.122566  294802 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
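
The download above appends the md5 obtained from the GCS API as a ?checksum=md5:<hex> query parameter, and the tarball is only treated as cached once that checksum verifies (the query form is the hashicorp/go-getter convention that minikube's download package builds on). Below is a plain-Go sketch of the same verify-while-downloading step, using the URL and checksum from this run; it is an illustration, not minikube's downloader.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// downloadVerified streams url to dest while hashing the bytes, then
// rejects the file if the md5 does not match wantMD5.
func downloadVerified(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	// MultiWriter hashes as it writes, so the payload is read only once.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4"
	if err := downloadVerified(url, "preloaded-images.tar.lz4", "2acdb4dde52794f2167c79dcee7507ae"); err != nil {
		log.Fatal(err)
	}
}
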
	I1025 09:37:47.122760  294802 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/config.json ...
	I1025 09:37:47.123006  294802 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:37:47.123042  294802 start.go:360] acquireMachinesLock for test-preload-220541: {Name:mkbfcb547a313f19c64c690f47ad8dc7ad9c5624 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:37:47.123148  294802 start.go:364] duration metric: took 74.803µs to acquireMachinesLock for "test-preload-220541"
	I1025 09:37:47.123176  294802 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:37:47.123188  294802 fix.go:54] fixHost starting: 
	I1025 09:37:47.123469  294802 cli_runner.go:164] Run: docker container inspect test-preload-220541 --format={{.State.Status}}
	I1025 09:37:47.140961  294802 fix.go:112] recreateIfNeeded on test-preload-220541: state=Stopped err=<nil>
	W1025 09:37:47.140992  294802 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:37:47.142660  294802 out.go:252] * Restarting existing docker container for "test-preload-220541" ...
	I1025 09:37:47.142737  294802 cli_runner.go:164] Run: docker start test-preload-220541
	I1025 09:37:47.368966  294802 cli_runner.go:164] Run: docker container inspect test-preload-220541 --format={{.State.Status}}
	I1025 09:37:47.387025  294802 kic.go:430] container "test-preload-220541" state is running.
	I1025 09:37:47.387433  294802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-220541
	I1025 09:37:47.404916  294802 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/config.json ...
	I1025 09:37:47.405123  294802 machine.go:93] provisionDockerMachine start ...
	I1025 09:37:47.405193  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:47.423737  294802 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:47.424008  294802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:37:47.424022  294802 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:37:47.424738  294802 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52452->127.0.0.1:33078: read: connection reset by peer
	I1025 09:37:50.566441  294802 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-220541
	
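
The handshake failure at 09:37:47 is expected immediately after `docker start`: sshd inside the container is not yet accepting connections, so the dial is retried until it succeeds about three seconds later. The sketch below shows a dial-with-retry of that shape; the address 127.0.0.1:33078 and key path come from this log, while the loop itself is an illustration rather than libmachine's implementation.

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps re-dialing until sshd in the freshly started
// container accepts the handshake or the attempts run out.
func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
		Timeout:         5 * time.Second,
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		lastErr = err // e.g. "ssh: handshake failed: ... connection reset by peer"
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh dial %s: %w", addr, lastErr)
}

func main() {
	c, err := dialWithRetry("127.0.0.1:33078", "docker",
		"/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa", 10)
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()
	fmt.Println("connected")
}
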
	I1025 09:37:50.566474  294802 ubuntu.go:182] provisioning hostname "test-preload-220541"
	I1025 09:37:50.566535  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:50.583943  294802 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:50.584153  294802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:37:50.584166  294802 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-220541 && echo "test-preload-220541" | sudo tee /etc/hostname
	I1025 09:37:50.734081  294802 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-220541
	
	I1025 09:37:50.734158  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:50.752940  294802 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:50.753181  294802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:37:50.753207  294802 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-220541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-220541/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-220541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:37:50.893088  294802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:37:50.893126  294802 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:37:50.893176  294802 ubuntu.go:190] setting up certificates
	I1025 09:37:50.893186  294802 provision.go:84] configureAuth start
	I1025 09:37:50.893240  294802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-220541
	I1025 09:37:50.910777  294802 provision.go:143] copyHostCerts
	I1025 09:37:50.910853  294802 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:37:50.910877  294802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:37:50.910959  294802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:37:50.911106  294802 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:37:50.911119  294802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:37:50.911164  294802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:37:50.911262  294802 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:37:50.911273  294802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:37:50.911310  294802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:37:50.911417  294802 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.test-preload-220541 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-220541]
	I1025 09:37:51.005753  294802 provision.go:177] copyRemoteCerts
	I1025 09:37:51.005825  294802 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:37:51.005864  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:51.024019  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:51.123542  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:37:51.141015  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 09:37:51.158033  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:37:51.174958  294802 provision.go:87] duration metric: took 281.746294ms to configureAuth
	I1025 09:37:51.174987  294802 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:37:51.175186  294802 config.go:182] Loaded profile config "test-preload-220541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1025 09:37:51.175326  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:51.192152  294802 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:51.192440  294802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:37:51.192466  294802 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:37:51.463562  294802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:37:51.463595  294802 machine.go:96] duration metric: took 4.058458222s to provisionDockerMachine
	I1025 09:37:51.463612  294802 start.go:293] postStartSetup for "test-preload-220541" (driver="docker")
	I1025 09:37:51.463626  294802 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:37:51.463705  294802 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:37:51.463769  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:51.481432  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:51.580717  294802 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:37:51.584212  294802 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:37:51.584239  294802 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:37:51.584248  294802 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:37:51.584308  294802 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:37:51.584429  294802 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:37:51.584548  294802 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:37:51.592003  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:37:51.609394  294802 start.go:296] duration metric: took 145.765255ms for postStartSetup
	I1025 09:37:51.609487  294802 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:37:51.609536  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:51.627259  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:51.723519  294802 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
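The two df probes above read the usage percentage and free gigabytes of the filesystem backing /var. Roughly the same numbers can be pulled without a shell via statfs; a Linux-only Go sketch (the percentage is approximate, since df reserves root-only blocks):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/var", &st); err != nil {
		panic(err)
	}
	total := st.Blocks * uint64(st.Bsize) // filesystem size in bytes
	free := st.Bavail * uint64(st.Bsize)  // bytes available to unprivileged users
	usedPct := 100 * (total - free) / total
	fmt.Printf("%d%% used, %dG free\n", usedPct, free>>30)
}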
	I1025 09:37:51.728058  294802 fix.go:56] duration metric: took 4.604861843s for fixHost
	I1025 09:37:51.728083  294802 start.go:83] releasing machines lock for "test-preload-220541", held for 4.604917827s
	I1025 09:37:51.728163  294802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-220541
	I1025 09:37:51.745580  294802 ssh_runner.go:195] Run: cat /version.json
	I1025 09:37:51.745617  294802 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:37:51.745627  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:51.745693  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:51.764037  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:51.764383  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:51.911767  294802 ssh_runner.go:195] Run: systemctl --version
	I1025 09:37:51.918202  294802 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:37:51.952221  294802 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:37:51.956955  294802 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:37:51.957024  294802 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:37:51.965310  294802 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:37:51.965341  294802 start.go:495] detecting cgroup driver to use...
	I1025 09:37:51.965390  294802 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:37:51.965442  294802 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:37:51.979148  294802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:37:51.991138  294802 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:37:51.991206  294802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:37:52.005171  294802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:37:52.017390  294802 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:37:52.097007  294802 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:37:52.176743  294802 docker.go:234] disabling docker service ...
	I1025 09:37:52.176803  294802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:37:52.190792  294802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:37:52.202900  294802 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:37:52.284179  294802 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:37:52.363042  294802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:37:52.375930  294802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:37:52.389686  294802 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1025 09:37:52.389738  294802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:52.398482  294802 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:37:52.398542  294802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:52.407304  294802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:52.416159  294802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:52.424588  294802 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:37:52.432477  294802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:52.441087  294802 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:52.449245  294802 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
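Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, systemd cgroup handling, a pod-scoped conmon cgroup, and the unprivileged-port sysctl in place. The net effect is approximately this fragment (section headers shown for orientation only; the real drop-in may arrange keys differently):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]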
	I1025 09:37:52.457702  294802 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:37:52.464864  294802 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:37:52.471924  294802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:52.548424  294802 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:37:52.654917  294802 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:37:52.654974  294802 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:37:52.658931  294802 start.go:563] Will wait 60s for crictl version
	I1025 09:37:52.658998  294802 ssh_runner.go:195] Run: which crictl
	I1025 09:37:52.662324  294802 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:37:52.686626  294802 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:37:52.686701  294802 ssh_runner.go:195] Run: crio --version
	I1025 09:37:52.713118  294802 ssh_runner.go:195] Run: crio --version
	I1025 09:37:52.741734  294802 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	I1025 09:37:52.743115  294802 cli_runner.go:164] Run: docker network inspect test-preload-220541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:37:52.760167  294802 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:37:52.764127  294802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
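The /etc/hosts edit above is an idempotent upsert: strip any line already tab-terminated with host.minikube.internal, then append the fresh mapping, so repeated starts never stack duplicate entries. A hypothetical Go rendering of the same trick (upsertHost is an invented name; writing /etc/hosts needs root):

package main

import (
	"os"
	"strings"
)

// upsertHost removes any existing "<ip>\t<name>" line and appends a fresh one.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}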
	I1025 09:37:52.774274  294802 kubeadm.go:883] updating cluster {Name:test-preload-220541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-220541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:37:52.774429  294802 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1025 09:37:52.774495  294802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:37:52.804631  294802 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:37:52.804653  294802 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:37:52.804700  294802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:37:52.829281  294802 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:37:52.829307  294802 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:37:52.829316  294802 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1025 09:37:52.829448  294802 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-220541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-220541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:37:52.829533  294802 ssh_runner.go:195] Run: crio config
	I1025 09:37:52.875121  294802 cni.go:84] Creating CNI manager for ""
	I1025 09:37:52.875142  294802 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:37:52.875160  294802 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:37:52.875182  294802 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-220541 NodeName:test-preload-220541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:37:52.875304  294802 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-220541"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:37:52.875385  294802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1025 09:37:52.883544  294802 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:37:52.883646  294802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:37:52.891313  294802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1025 09:37:52.904034  294802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:37:52.916557  294802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1025 09:37:52.928841  294802 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:37:52.932557  294802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:37:52.942273  294802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:53.020284  294802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:37:53.043691  294802 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541 for IP: 192.168.76.2
	I1025 09:37:53.043712  294802 certs.go:195] generating shared ca certs ...
	I1025 09:37:53.043739  294802 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:53.043930  294802 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:37:53.044000  294802 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:37:53.044020  294802 certs.go:257] generating profile certs ...
	I1025 09:37:53.044120  294802 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/client.key
	I1025 09:37:53.044194  294802 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/apiserver.key.87d9b87c
	I1025 09:37:53.044249  294802 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/proxy-client.key
	I1025 09:37:53.044412  294802 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:37:53.044451  294802 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:37:53.044463  294802 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:37:53.044490  294802 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:37:53.044519  294802 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:37:53.044548  294802 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:37:53.044602  294802 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:37:53.045398  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:37:53.064105  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:37:53.083041  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:37:53.102395  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:37:53.126553  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 09:37:53.144341  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:37:53.161323  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:37:53.178078  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:37:53.194573  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:37:53.211542  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:37:53.228290  294802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:37:53.246330  294802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:37:53.258308  294802 ssh_runner.go:195] Run: openssl version
	I1025 09:37:53.264182  294802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:37:53.272475  294802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:53.276048  294802 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:53.276097  294802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:53.310040  294802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:37:53.318399  294802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:37:53.326651  294802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:37:53.330462  294802 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:37:53.330520  294802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:37:53.364139  294802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:37:53.372476  294802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:37:53.380797  294802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:37:53.384664  294802 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:37:53.384729  294802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:37:53.419665  294802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
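The `openssl x509 -hash` / `ln -fs <hash>.0` dance above exists because OpenSSL locates trusted CAs by subject-hash filename: minikubeCA.pem hashes to b5213941, hence the b5213941.0 symlink it checks for. A sketch that shells out to openssl the same way (linkCert is an invented helper; run as root to write /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert creates the /etc/ssl/certs/<subject-hash>.0 symlink for one PEM.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}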
	I1025 09:37:53.428321  294802 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:37:53.432300  294802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:37:53.466429  294802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:37:53.500094  294802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:37:53.533944  294802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:37:53.577251  294802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:37:53.619160  294802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
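`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether the control-plane certs can be reused. The same assertion in Go (path illustrative; PEM error handling kept minimal):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data) // assumes a well-formed PEM certificate
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if deadline := time.Now().Add(24 * time.Hour); cert.NotAfter.Before(deadline) {
		fmt.Printf("certificate expires %s: within 24h, would trigger regeneration\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid beyond 24h")
}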
	I1025 09:37:53.660478  294802 kubeadm.go:400] StartCluster: {Name:test-preload-220541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-220541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:53.660578  294802 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:37:53.660631  294802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:37:53.687677  294802 cri.go:89] found id: ""
	I1025 09:37:53.687746  294802 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:37:53.695837  294802 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:37:53.695857  294802 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:37:53.695926  294802 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:37:53.703032  294802 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:37:53.703452  294802 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-220541" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:37:53.703571  294802 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-130604/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-220541" cluster setting kubeconfig missing "test-preload-220541" context setting]
	I1025 09:37:53.703861  294802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:53.704316  294802 kapi.go:59] client config for test-preload-220541: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/client.key", CAFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:37:53.704714  294802 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 09:37:53.704727  294802 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 09:37:53.704732  294802 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 09:37:53.704735  294802 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 09:37:53.704742  294802 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 09:37:53.705067  294802 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:37:53.712285  294802 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 09:37:53.712316  294802 kubeadm.go:601] duration metric: took 16.45319ms to restartPrimaryControlPlane
	I1025 09:37:53.712326  294802 kubeadm.go:402] duration metric: took 51.858815ms to StartCluster
	I1025 09:37:53.712354  294802 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:53.712421  294802 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:37:53.712996  294802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:53.713192  294802 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:37:53.713245  294802 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:37:53.713355  294802 addons.go:69] Setting storage-provisioner=true in profile "test-preload-220541"
	I1025 09:37:53.713369  294802 config.go:182] Loaded profile config "test-preload-220541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1025 09:37:53.713374  294802 addons.go:238] Setting addon storage-provisioner=true in "test-preload-220541"
	W1025 09:37:53.713417  294802 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:37:53.713444  294802 host.go:66] Checking if "test-preload-220541" exists ...
	I1025 09:37:53.713374  294802 addons.go:69] Setting default-storageclass=true in profile "test-preload-220541"
	I1025 09:37:53.713498  294802 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-220541"
	I1025 09:37:53.713750  294802 cli_runner.go:164] Run: docker container inspect test-preload-220541 --format={{.State.Status}}
	I1025 09:37:53.713818  294802 cli_runner.go:164] Run: docker container inspect test-preload-220541 --format={{.State.Status}}
	I1025 09:37:53.716526  294802 out.go:179] * Verifying Kubernetes components...
	I1025 09:37:53.717665  294802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:53.733508  294802 kapi.go:59] client config for test-preload-220541: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/test-preload-220541/client.key", CAFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:37:53.733750  294802 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:37:53.733907  294802 addons.go:238] Setting addon default-storageclass=true in "test-preload-220541"
	W1025 09:37:53.733927  294802 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:37:53.733959  294802 host.go:66] Checking if "test-preload-220541" exists ...
	I1025 09:37:53.734443  294802 cli_runner.go:164] Run: docker container inspect test-preload-220541 --format={{.State.Status}}
	I1025 09:37:53.735399  294802 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:37:53.735424  294802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:37:53.735486  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:53.760642  294802 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:37:53.760683  294802 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:37:53.760754  294802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-220541
	I1025 09:37:53.761518  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:53.782688  294802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/test-preload-220541/id_rsa Username:docker}
	I1025 09:37:53.817906  294802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:37:53.830735  294802 node_ready.go:35] waiting up to 6m0s for node "test-preload-220541" to be "Ready" ...
	I1025 09:37:53.869673  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:37:53.888423  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:53.926890  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:53.926938  294802 retry.go:31] will retry after 229.977996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:37:53.942698  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:53.942730  294802 retry.go:31] will retry after 317.889005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
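The retry.go delays in this run (~230ms, ~318ms, then 1.3s, 1.5s, 2.6s, 3.3s, 4.9s) are consistent with capped exponential backoff plus random jitter while the restarted apiserver comes back up. A generic sketch of that pattern, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, doubling the delay each round
// and adding up to 50% random jitter, like the "will retry after" lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base << i
		d += time.Duration(rand.Int63n(int64(d / 2)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retry(5, 250*time.Millisecond, func() error {
		return errors.New("connection refused") // stands in for the failing kubectl apply
	})
	fmt.Println("gave up:", err)
}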
	I1025 09:37:54.158112  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:37:54.212204  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.212242  294802 retry.go:31] will retry after 262.272462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.261467  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:54.316490  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.316524  294802 retry.go:31] will retry after 273.444312ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.475366  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:37:54.528886  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.528914  294802 retry.go:31] will retry after 340.586386ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.591148  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:54.647387  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.647418  294802 retry.go:31] will retry after 349.59827ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.870466  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:37:54.925561  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.925596  294802 retry.go:31] will retry after 426.813175ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:54.997898  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:55.052701  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:55.052735  294802 retry.go:31] will retry after 482.685044ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:55.352591  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:37:55.407065  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:55.407099  294802 retry.go:31] will retry after 1.31010726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:55.536373  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:55.590488  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:55.590522  294802 retry.go:31] will retry after 1.057724306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:37:55.832339  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
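node_ready.go keeps polling the node object until its Ready condition turns True, tolerating connection-refused errors like the one above while kube-apiserver restarts. A rough client-go sketch of that loop (kubeconfig path and the 2s poll interval are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "test-preload-220541", metav1.GetOptions{})
		if err != nil {
			fmt.Println("error getting node (will retry):", err) // e.g. connection refused during apiserver restart
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}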
	I1025 09:37:56.648606  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:56.701016  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:56.701056  294802 retry.go:31] will retry after 1.452130445s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:56.718235  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:37:56.772318  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:56.772361  294802 retry.go:31] will retry after 1.579292426s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:58.153670  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:37:58.207570  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:58.207611  294802 retry.go:31] will retry after 2.561855368s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:37:58.332381  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:37:58.352565  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:37:58.406463  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:37:58.406499  294802 retry.go:31] will retry after 3.320673589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:00.769892  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:38:00.826233  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:00.826266  294802 retry.go:31] will retry after 4.907315926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:00.831722  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:01.727475  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:38:01.783957  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:01.783988  294802 retry.go:31] will retry after 5.200894374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:03.331581  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:05.734014  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:38:05.788789  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:05.788827  294802 retry.go:31] will retry after 8.842449513s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:05.831306  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:06.985654  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:38:07.039570  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:07.039602  294802 retry.go:31] will retry after 5.880928081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:08.332028  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:10.332338  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:12.831501  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:12.920663  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:38:12.975694  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:12.975728  294802 retry.go:31] will retry after 9.826083703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:14.632446  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:38:14.686825  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:14.686867  294802 retry.go:31] will retry after 9.505095425s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:14.831629  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:17.331335  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:19.831401  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:21.831543  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:22.802041  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:38:22.856802  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:22.856841  294802 retry.go:31] will retry after 15.213528133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:23.832129  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:24.192576  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:38:24.246238  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:24.246274  294802 retry.go:31] will retry after 14.095691585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:26.331911  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:28.332033  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:30.332271  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:32.831436  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:35.332315  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:37.831386  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:38.070621  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:38:38.124037  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:38.124068  294802 retry.go:31] will retry after 13.377991476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:38.343149  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:38:38.398036  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:38.398070  294802 retry.go:31] will retry after 19.80899147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:39.831973  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:42.332276  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:44.831311  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:46.831556  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:49.331426  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:51.331667  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:51.502946  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:38:51.557113  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:51.557148  294802 retry.go:31] will retry after 33.041558736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:53.332193  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:38:55.832098  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:38:58.207832  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:38:58.261550  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:38:58.261581  294802 retry.go:31] will retry after 25.385163195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:38:58.332318  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:00.831317  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:02.831596  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:04.831671  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:06.832021  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:08.832385  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:10.832457  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:13.331498  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:15.332058  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:17.332300  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:19.831326  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:21.831589  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:39:23.647805  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:39:23.703175  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:39:23.703301  294802 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1025 09:39:23.832120  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:39:24.599620  294802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1025 09:39:24.653306  294802 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:39:24.653433  294802 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1025 09:39:24.655098  294802 out.go:179] * Enabled addons: 
	I1025 09:39:24.656118  294802 addons.go:514] duration metric: took 1m30.942873555s for enable addons: enabled=[]
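	[editor's note] The retry.go lines above show the pattern minikube is following around `kubectl apply`: each failure schedules another attempt after a growing, jittered delay (1.45s, 2.56s, 4.9s, ... up to 33s here) until the overall addon window — 1m30s in this run — expires. Below is a minimal Go sketch of that backoff loop; `applyWithRetry` and its bounds are illustrative assumptions, not minikube's actual implementation.

	    package main

	    import (
	    	"fmt"
	    	"math/rand"
	    	"os/exec"
	    	"time"
	    )

	    // applyWithRetry shells out to kubectl and retries on failure with
	    // jittered, doubling backoff, mirroring the retry.go cadence in the
	    // log above. Hypothetical helper; manifest path taken from the log.
	    func applyWithRetry(manifest string, deadline time.Duration) error {
	    	start := time.Now()
	    	backoff := time.Second
	    	for {
	    		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
	    		if err == nil {
	    			return nil
	    		}
	    		if time.Since(start) > deadline {
	    			return fmt.Errorf("giving up on %s: %v\n%s", manifest, err, out)
	    		}
	    		// Grow the delay and add jitter so concurrent appliers do not
	    		// hammer a recovering apiserver in lockstep.
	    		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
	    		fmt.Printf("will retry after %s: %v\n", wait, err)
	    		time.Sleep(wait)
	    		backoff *= 2
	    	}
	    }

	    func main() {
	    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 90*time.Second); err != nil {
	    		fmt.Println(err)
	    	}
	    }

	Note that the `--validate=false` escape hatch suggested in the stderr would not have helped here: validation fails only because the apiserver on localhost:8443 is unreachable, so the apply itself would fail regardless.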
	W1025 09:39:26.331456  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:28.332157  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:30.832002  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:33.331688  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:35.332220  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:37.832018  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:40.331737  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:42.831578  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:44.832126  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:47.331905  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:49.831632  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:51.832264  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:54.331430  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:56.331812  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:39:58.831450  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:00.832060  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:03.331462  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:05.831446  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:07.832230  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:10.331774  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:12.831373  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:14.832059  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:17.332031  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:19.831734  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:21.832378  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:24.332221  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:26.832067  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:29.331836  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:31.831727  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:34.331584  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:36.332144  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:38.831657  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:40.832278  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:43.331865  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:45.831608  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:48.331438  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:50.331876  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:52.831450  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:54.832299  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:57.332093  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:40:59.831481  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:01.832016  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:04.331582  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:06.332056  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:08.831522  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:10.831951  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:13.331461  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:15.332092  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:17.831612  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:19.831829  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:22.331270  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:24.331839  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:26.831798  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:29.331322  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:31.331622  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:33.332230  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:35.831756  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:37.832286  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:40.331467  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:42.332183  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:44.832060  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:47.331743  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:49.332085  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:51.831789  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:53.832427  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:56.332228  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:41:58.832186  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:01.331677  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:03.831415  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:05.832023  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:08.331696  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:10.831560  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:13.331497  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:15.332087  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:17.831922  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:20.331529  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:22.332206  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:24.831626  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:26.832068  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:29.331585  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:31.332258  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:33.831796  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:36.331598  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:38.331958  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:40.831612  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:43.331492  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:45.331853  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:47.332296  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:49.831823  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:52.331702  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:54.831435  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:56.831945  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:42:59.331557  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:01.332218  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:03.831512  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:05.831984  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:08.331582  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:10.332231  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:12.832138  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:15.331767  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:17.332261  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:19.831576  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:21.832056  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:24.331475  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:26.331969  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:28.332110  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:30.831563  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:32.832308  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:34.832396  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:37.332072  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:39.831526  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:41.832147  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:44.331736  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:46.831260  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:48.831666  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:50.832372  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 09:43:53.331420  294802 node_ready.go:55] error getting node "test-preload-220541" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-220541": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 09:43:53.831043  294802 node_ready.go:38] duration metric: took 6m0.000241159s for node "test-preload-220541" to be "Ready" ...
	I1025 09:43:53.832972  294802 out.go:203] 
	W1025 09:43:53.834339  294802 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1025 09:43:53.834369  294802 out.go:285] * 
	W1025 09:43:53.835980  294802 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:43:53.837262  294802 out.go:203] 
	
	
	==> CRI-O <==
	Oct 25 09:39:19 test-preload-220541 crio[547]: time="2025-10-25T09:39:19.157919218Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/45e11c3e6730b48a5e2f1aa01f55b903cbe391407041bda778b89f2c334e46fa/merged\": directory not empty" id=5e926966-aac5-4eb4-9f26-54e54b8de7c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:40:02 test-preload-220541 crio[547]: time="2025-10-25T09:40:02.406168045Z" level=info msg="createCtr: deleting container b145b4a1f202ce3060cc69ad7d5b7729f03f46eedd8085520ab45c5008d68e09 from storage" id=21f333ae-9028-4df5-8ce1-b3bd17d84b21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:40:02 test-preload-220541 crio[547]: time="2025-10-25T09:40:02.40620798Z" level=info msg="createCtr: deleting container e3050f2b206cf71e7fdc7cd1e14ad46611d04b1c39e0ab6c69da878ae9f795f2 from storage" id=d070c844-42cb-4ba1-98d5-6efae16807a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:40:02 test-preload-220541 crio[547]: time="2025-10-25T09:40:02.406442275Z" level=info msg="createCtr: deleting container aaed88ea1dbb29293e449b169e5e0adad415b7a3e11da92b07d06305f1bd10e1 from storage" id=177307c7-b376-4d15-8577-2eebba893d8f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:40:02 test-preload-220541 crio[547]: time="2025-10-25T09:40:02.40675264Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/bf6ddad07b556c00b66052171d8663cd18d1235935811b42a948a9c3bfcaebf2/merged\": directory not empty" id=21f333ae-9028-4df5-8ce1-b3bd17d84b21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:40:02 test-preload-220541 crio[547]: time="2025-10-25T09:40:02.406883008Z" level=info msg="createCtr: deleting container e6fd8666358b0aa7cce1db89be7fe5fe934803216e07da78ccf3f822f87e7609 from storage" id=5e926966-aac5-4eb4-9f26-54e54b8de7c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:40:02 test-preload-220541 crio[547]: time="2025-10-25T09:40:02.407048568Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/4d45a46202282856ca6dec8f8abc045a73b2488f5d0479da4fc2b349a104a8a0/merged\": directory not empty" id=d070c844-42cb-4ba1-98d5-6efae16807a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:40:02 test-preload-220541 crio[547]: time="2025-10-25T09:40:02.407323226Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/b83a233d70a5109f1e94c88a5845e670c99e1b4c87a977861f6a5863ccad5a74/merged\": directory not empty" id=177307c7-b376-4d15-8577-2eebba893d8f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:40:02 test-preload-220541 crio[547]: time="2025-10-25T09:40:02.407548374Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/45e11c3e6730b48a5e2f1aa01f55b903cbe391407041bda778b89f2c334e46fa/merged\": directory not empty" id=5e926966-aac5-4eb4-9f26-54e54b8de7c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:41:07 test-preload-220541 crio[547]: time="2025-10-25T09:41:07.280837388Z" level=info msg="createCtr: deleting container e3050f2b206cf71e7fdc7cd1e14ad46611d04b1c39e0ab6c69da878ae9f795f2 from storage" id=d070c844-42cb-4ba1-98d5-6efae16807a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:41:07 test-preload-220541 crio[547]: time="2025-10-25T09:41:07.280910717Z" level=info msg="createCtr: deleting container e6fd8666358b0aa7cce1db89be7fe5fe934803216e07da78ccf3f822f87e7609 from storage" id=5e926966-aac5-4eb4-9f26-54e54b8de7c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:41:07 test-preload-220541 crio[547]: time="2025-10-25T09:41:07.280887095Z" level=info msg="createCtr: deleting container b145b4a1f202ce3060cc69ad7d5b7729f03f46eedd8085520ab45c5008d68e09 from storage" id=21f333ae-9028-4df5-8ce1-b3bd17d84b21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:41:07 test-preload-220541 crio[547]: time="2025-10-25T09:41:07.280938522Z" level=info msg="createCtr: deleting container aaed88ea1dbb29293e449b169e5e0adad415b7a3e11da92b07d06305f1bd10e1 from storage" id=177307c7-b376-4d15-8577-2eebba893d8f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:41:07 test-preload-220541 crio[547]: time="2025-10-25T09:41:07.281241213Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/45e11c3e6730b48a5e2f1aa01f55b903cbe391407041bda778b89f2c334e46fa/merged\": directory not empty" id=5e926966-aac5-4eb4-9f26-54e54b8de7c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:41:07 test-preload-220541 crio[547]: time="2025-10-25T09:41:07.281505135Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/bf6ddad07b556c00b66052171d8663cd18d1235935811b42a948a9c3bfcaebf2/merged\": directory not empty" id=21f333ae-9028-4df5-8ce1-b3bd17d84b21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:41:07 test-preload-220541 crio[547]: time="2025-10-25T09:41:07.281707769Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/4d45a46202282856ca6dec8f8abc045a73b2488f5d0479da4fc2b349a104a8a0/merged\": directory not empty" id=d070c844-42cb-4ba1-98d5-6efae16807a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:41:07 test-preload-220541 crio[547]: time="2025-10-25T09:41:07.281867107Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/b83a233d70a5109f1e94c88a5845e670c99e1b4c87a977861f6a5863ccad5a74/merged\": directory not empty" id=177307c7-b376-4d15-8577-2eebba893d8f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:42:44 test-preload-220541 crio[547]: time="2025-10-25T09:42:44.59168795Z" level=info msg="createCtr: deleting container e3050f2b206cf71e7fdc7cd1e14ad46611d04b1c39e0ab6c69da878ae9f795f2 from storage" id=d070c844-42cb-4ba1-98d5-6efae16807a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:42:44 test-preload-220541 crio[547]: time="2025-10-25T09:42:44.591744254Z" level=info msg="createCtr: deleting container e6fd8666358b0aa7cce1db89be7fe5fe934803216e07da78ccf3f822f87e7609 from storage" id=5e926966-aac5-4eb4-9f26-54e54b8de7c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:42:44 test-preload-220541 crio[547]: time="2025-10-25T09:42:44.591791752Z" level=info msg="createCtr: deleting container aaed88ea1dbb29293e449b169e5e0adad415b7a3e11da92b07d06305f1bd10e1 from storage" id=177307c7-b376-4d15-8577-2eebba893d8f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:42:44 test-preload-220541 crio[547]: time="2025-10-25T09:42:44.591795231Z" level=info msg="createCtr: deleting container b145b4a1f202ce3060cc69ad7d5b7729f03f46eedd8085520ab45c5008d68e09 from storage" id=21f333ae-9028-4df5-8ce1-b3bd17d84b21 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:42:44 test-preload-220541 crio[547]: time="2025-10-25T09:42:44.592211446Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/4d45a46202282856ca6dec8f8abc045a73b2488f5d0479da4fc2b349a104a8a0/merged\": directory not empty" id=d070c844-42cb-4ba1-98d5-6efae16807a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:42:44 test-preload-220541 crio[547]: time="2025-10-25T09:42:44.592550779Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/45e11c3e6730b48a5e2f1aa01f55b903cbe391407041bda778b89f2c334e46fa/merged\": directory not empty" id=5e926966-aac5-4eb4-9f26-54e54b8de7c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:42:44 test-preload-220541 crio[547]: time="2025-10-25T09:42:44.592714141Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/b83a233d70a5109f1e94c88a5845e670c99e1b4c87a977861f6a5863ccad5a74/merged\": directory not empty" id=177307c7-b376-4d15-8577-2eebba893d8f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:42:44 test-preload-220541 crio[547]: time="2025-10-25T09:42:44.59288613Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/bf6ddad07b556c00b66052171d8663cd18d1235935811b42a948a9c3bfcaebf2/merged\": directory not empty" id=21f333ae-9028-4df5-8ce1-b3bd17d84b21 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 1c f5 68 9f 00 08 06
	[  +4.451388] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 07 4a e3 be 93 08 06
	[Oct25 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.025995] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.023888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.023905] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.024896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.022924] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +2.047850] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +4.031640] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +8.511323] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[ +16.382644] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:03] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	
	
	==> kernel <==
	 09:43:54 up  1:26,  0 user,  load average: 0.03, 0.36, 0.77
	Linux test-preload-220541 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 25 09:43:24 test-preload-220541 kubelet[709]: E1025 09:43:24.750816     709 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dtest-preload-220541&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Oct 25 09:43:27 test-preload-220541 kubelet[709]: E1025 09:43:27.766847     709 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-220541?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 25 09:43:27 test-preload-220541 kubelet[709]: I1025 09:43:27.930459     709 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-220541"
	Oct 25 09:43:27 test-preload-220541 kubelet[709]: E1025 09:43:27.930872     709 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-220541"
	Oct 25 09:43:28 test-preload-220541 kubelet[709]: E1025 09:43:28.918730     709 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-220541.1871b2644c011dd2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-220541,UID:test-preload-220541,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-220541 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-220541,},FirstTimestamp:2025-10-25 09:37:53.120189906 +0000 UTC m=+0.073969678,LastTimestamp:2025-10-25 09:37:53.120189906 +0000 UTC m=+0.073969678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:test-preload-220541,}"
	Oct 25 09:43:29 test-preload-220541 kubelet[709]: W1025 09:43:29.532833     709 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Oct 25 09:43:29 test-preload-220541 kubelet[709]: E1025 09:43:29.532911     709 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Oct 25 09:43:33 test-preload-220541 kubelet[709]: E1025 09:43:33.147151     709 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-220541\" not found"
	Oct 25 09:43:34 test-preload-220541 kubelet[709]: E1025 09:43:34.768137     709 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-220541?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 25 09:43:34 test-preload-220541 kubelet[709]: I1025 09:43:34.933069     709 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-220541"
	Oct 25 09:43:34 test-preload-220541 kubelet[709]: E1025 09:43:34.933417     709 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-220541"
	Oct 25 09:43:37 test-preload-220541 kubelet[709]: W1025 09:43:37.486936     709 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Oct 25 09:43:37 test-preload-220541 kubelet[709]: E1025 09:43:37.487017     709 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Oct 25 09:43:38 test-preload-220541 kubelet[709]: E1025 09:43:38.919435     709 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-220541.1871b2644c011dd2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-220541,UID:test-preload-220541,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-220541 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-220541,},FirstTimestamp:2025-10-25 09:37:53.120189906 +0000 UTC m=+0.073969678,LastTimestamp:2025-10-25 09:37:53.120189906 +0000 UTC m=+0.073969678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:test-preload-220541,}"
	Oct 25 09:43:38 test-preload-220541 kubelet[709]: W1025 09:43:38.946991     709 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Oct 25 09:43:38 test-preload-220541 kubelet[709]: E1025 09:43:38.947062     709 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Oct 25 09:43:41 test-preload-220541 kubelet[709]: E1025 09:43:41.769388     709 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-220541?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 25 09:43:41 test-preload-220541 kubelet[709]: I1025 09:43:41.934816     709 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-220541"
	Oct 25 09:43:41 test-preload-220541 kubelet[709]: E1025 09:43:41.935270     709 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-220541"
	Oct 25 09:43:43 test-preload-220541 kubelet[709]: E1025 09:43:43.148175     709 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-220541\" not found"
	Oct 25 09:43:48 test-preload-220541 kubelet[709]: E1025 09:43:48.770639     709 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-220541?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 25 09:43:48 test-preload-220541 kubelet[709]: E1025 09:43:48.920429     709 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-220541.1871b2644c011dd2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-220541,UID:test-preload-220541,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-220541 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-220541,},FirstTimestamp:2025-10-25 09:37:53.120189906 +0000 UTC m=+0.073969678,LastTimestamp:2025-10-25 09:37:53.120189906 +0000 UTC m=+0.073969678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:test-preload-220541,}"
	Oct 25 09:43:48 test-preload-220541 kubelet[709]: I1025 09:43:48.936583     709 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-220541"
	Oct 25 09:43:48 test-preload-220541 kubelet[709]: E1025 09:43:48.936978     709 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-220541"
	Oct 25 09:43:53 test-preload-220541 kubelet[709]: E1025 09:43:53.149220     709 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-220541\" not found"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-220541 -n test-preload-220541
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-220541 -n test-preload-220541: exit status 2 (294.44341ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "test-preload-220541" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-220541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-220541
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-220541: (2.38852447s)
--- FAIL: TestPreload (437.96s)
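The failure pattern above is consistent end to end: every readiness probe against https://192.168.76.2:8443 is refused, CRI-O loops on "failed to cleanup container storage ... directory not empty" for the same four containers, and the container status table is empty, so kube-apiserver never started. A minimal diagnostic sketch against a still-live profile (the profile here is already deleted, so these commands are illustrative only; the mount-inspection step assumes, without proof from this log, that a stale overlay mount is what blocks the cleanup):

	# Probe the endpoint the readiness loop polls:
	curl -sk https://192.168.76.2:8443/healthz
	# Confirm no CRI-O container ever reaches Running:
	minikube ssh -p test-preload-220541 "sudo crictl ps -a"
	# Look for leftover mounts under the overlay directory named in the CRI-O errors:
	minikube ssh -p test-preload-220541 "sudo findmnt -R /var/lib/containers/storage/overlay"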

TestPause/serial/Pause (5.58s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-175355 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-175355 --alsologtostderr -v=5: exit status 80 (1.845363199s)

-- stdout --
	* Pausing node pause-175355 ... 
	
	

-- /stdout --
** stderr ** 
	I1025 09:46:39.087002  320434 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:46:39.087273  320434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:39.087284  320434 out.go:374] Setting ErrFile to fd 2...
	I1025 09:46:39.087291  320434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:39.087608  320434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:46:39.087946  320434 out.go:368] Setting JSON to false
	I1025 09:46:39.087992  320434 mustload.go:65] Loading cluster: pause-175355
	I1025 09:46:39.088518  320434 config.go:182] Loaded profile config "pause-175355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:39.089094  320434 cli_runner.go:164] Run: docker container inspect pause-175355 --format={{.State.Status}}
	I1025 09:46:39.110851  320434 host.go:66] Checking if "pause-175355" exists ...
	I1025 09:46:39.111239  320434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:39.191481  320434 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-25 09:46:39.180103903 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:46:39.192513  320434 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-175355 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:46:39.197458  320434 out.go:179] * Pausing node pause-175355 ... 
	I1025 09:46:39.202440  320434 host.go:66] Checking if "pause-175355" exists ...
	I1025 09:46:39.203687  320434 ssh_runner.go:195] Run: systemctl --version
	I1025 09:46:39.203786  320434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:39.228705  320434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/pause-175355/id_rsa Username:docker}
	I1025 09:46:39.341256  320434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:46:39.358889  320434 pause.go:52] kubelet running: true
	I1025 09:46:39.359028  320434 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:46:39.548462  320434 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:46:39.548556  320434 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:46:39.632221  320434 cri.go:89] found id: "9e2c562768d3ef40e165f634b295addfa5cb5274c101de18e0ab8490c95e8116"
	I1025 09:46:39.632248  320434 cri.go:89] found id: "a03995ac4fccce122a9c1002f2459b9981e66d7e2725203dabc6c68c7494cc34"
	I1025 09:46:39.632254  320434 cri.go:89] found id: "5a8d746e97640c6b363685a4b3c2c1a0914d9ab9fe85a43546d4e990f24b8958"
	I1025 09:46:39.632258  320434 cri.go:89] found id: "ad3565a09ce225576f9b373c237fdbcf567fad3f696a31c2511744550b317595"
	I1025 09:46:39.632263  320434 cri.go:89] found id: "7d005017d0bf28c57b242a453d705732d305644fa622751689bb381580ae0cc9"
	I1025 09:46:39.632268  320434 cri.go:89] found id: "6c4196345d8b07529fb0ecb3b623137f5db068aabe551ff91243a064f9e8040e"
	I1025 09:46:39.632273  320434 cri.go:89] found id: "e5339095bdcaff6188dc8b28e527f13dbdfc4e3e0f4daa38be912228b489c18d"
	I1025 09:46:39.632277  320434 cri.go:89] found id: ""
	I1025 09:46:39.632335  320434 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:46:39.645264  320434 retry.go:31] will retry after 337.135321ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:46:39Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:46:39.982683  320434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:46:39.996156  320434 pause.go:52] kubelet running: false
	I1025 09:46:39.996208  320434 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:46:40.117980  320434 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:46:40.118067  320434 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:46:40.185389  320434 cri.go:89] found id: "9e2c562768d3ef40e165f634b295addfa5cb5274c101de18e0ab8490c95e8116"
	I1025 09:46:40.185416  320434 cri.go:89] found id: "a03995ac4fccce122a9c1002f2459b9981e66d7e2725203dabc6c68c7494cc34"
	I1025 09:46:40.185420  320434 cri.go:89] found id: "5a8d746e97640c6b363685a4b3c2c1a0914d9ab9fe85a43546d4e990f24b8958"
	I1025 09:46:40.185424  320434 cri.go:89] found id: "ad3565a09ce225576f9b373c237fdbcf567fad3f696a31c2511744550b317595"
	I1025 09:46:40.185427  320434 cri.go:89] found id: "7d005017d0bf28c57b242a453d705732d305644fa622751689bb381580ae0cc9"
	I1025 09:46:40.185429  320434 cri.go:89] found id: "6c4196345d8b07529fb0ecb3b623137f5db068aabe551ff91243a064f9e8040e"
	I1025 09:46:40.185432  320434 cri.go:89] found id: "e5339095bdcaff6188dc8b28e527f13dbdfc4e3e0f4daa38be912228b489c18d"
	I1025 09:46:40.185434  320434 cri.go:89] found id: ""
	I1025 09:46:40.185480  320434 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:46:40.197270  320434 retry.go:31] will retry after 412.587521ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:46:40Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:46:40.610714  320434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:46:40.624639  320434 pause.go:52] kubelet running: false
	I1025 09:46:40.624688  320434 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:46:40.766576  320434 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:46:40.766694  320434 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:46:40.838433  320434 cri.go:89] found id: "9e2c562768d3ef40e165f634b295addfa5cb5274c101de18e0ab8490c95e8116"
	I1025 09:46:40.838458  320434 cri.go:89] found id: "a03995ac4fccce122a9c1002f2459b9981e66d7e2725203dabc6c68c7494cc34"
	I1025 09:46:40.838463  320434 cri.go:89] found id: "5a8d746e97640c6b363685a4b3c2c1a0914d9ab9fe85a43546d4e990f24b8958"
	I1025 09:46:40.838467  320434 cri.go:89] found id: "ad3565a09ce225576f9b373c237fdbcf567fad3f696a31c2511744550b317595"
	I1025 09:46:40.838471  320434 cri.go:89] found id: "7d005017d0bf28c57b242a453d705732d305644fa622751689bb381580ae0cc9"
	I1025 09:46:40.838475  320434 cri.go:89] found id: "6c4196345d8b07529fb0ecb3b623137f5db068aabe551ff91243a064f9e8040e"
	I1025 09:46:40.838483  320434 cri.go:89] found id: "e5339095bdcaff6188dc8b28e527f13dbdfc4e3e0f4daa38be912228b489c18d"
	I1025 09:46:40.838487  320434 cri.go:89] found id: ""
	I1025 09:46:40.838550  320434 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:46:40.854212  320434 out.go:203] 
	W1025 09:46:40.855312  320434 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:46:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:46:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:46:40.855334  320434 out.go:285] * 
	* 
	W1025 09:46:40.859487  320434 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:46:40.860610  320434 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-175355 --alsologtostderr -v=5" : exit status 80
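The trace shows the pause flow: minikube disables the kubelet, enumerates CRI containers in the kube-system, kubernetes-dashboard, and istio-operator namespaces (which succeeds), then shells out to `sudo runc list -f json`, which fails on every retry because /run/runc does not exist on the node. A hedged manual replay (which state root the CRI-O runtime actually uses here, e.g. /run/crun, is an assumption to be checked, not a fact established by this log):

	# The exact command the pause step retries, taken from the trace above:
	minikube ssh -p pause-175355 "sudo runc list -f json"
	# Check which runtime state roots actually exist on the node:
	minikube ssh -p pause-175355 "ls -d /run/runc /run/crun 2>/dev/null"
	# The container enumeration that does succeed in the trace:
	minikube ssh -p pause-175355 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"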
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-175355
helpers_test.go:243: (dbg) docker inspect pause-175355:

-- stdout --
	[
	    {
	        "Id": "bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f",
	        "Created": "2025-10-25T09:45:55.716408665Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307201,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:45:55.77808971Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f/hostname",
	        "HostsPath": "/var/lib/docker/containers/bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f/hosts",
	        "LogPath": "/var/lib/docker/containers/bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f/bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f-json.log",
	        "Name": "/pause-175355",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-175355:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-175355",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f",
	                "LowerDir": "/var/lib/docker/overlay2/6c5b03bf38486330493419c0c0d808151f6bb05dd74d334b1f6e24b620a5a633-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c5b03bf38486330493419c0c0d808151f6bb05dd74d334b1f6e24b620a5a633/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c5b03bf38486330493419c0c0d808151f6bb05dd74d334b1f6e24b620a5a633/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c5b03bf38486330493419c0c0d808151f6bb05dd74d334b1f6e24b620a5a633/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-175355",
	                "Source": "/var/lib/docker/volumes/pause-175355/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-175355",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-175355",
	                "name.minikube.sigs.k8s.io": "pause-175355",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec19cef6faa2bb6db0d5c71b916ef7eb9866bc2c78d8c426c7241e072ca5a1b7",
	            "SandboxKey": "/var/run/docker/netns/ec19cef6faa2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-175355": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:96:69:95:ed:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "271348bbdff7b7aa2cf9305be03ebc3eb0f9fd766abd7db6c833b6914d5e67a6",
	                    "EndpointID": "eea7e3aebdff7b4a923de7ff3ffd9ec4677b52c79ccde984e205756d864d55de",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-175355",
	                        "bcece3f1833d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
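For the state fields that matter here, the full inspect dump reduces to a one-line check (a hypothetical condensed form of the same query): despite `pause` exiting with status 80, the node container itself is still running and was never paused.

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-175355
	# from the JSON above this prints: running paused=false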
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-175355 -n pause-175355
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-175355 -n pause-175355: exit status 2 (366.066153ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-175355 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-175355 logs -n 25: (1.036583933s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-692905 --schedule 5m                                                                                      │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --schedule 5m                                                                                      │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --schedule 15s                                                                                     │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --schedule 15s                                                                                     │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --schedule 15s                                                                                     │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --cancel-scheduled                                                                                 │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │ 25 Oct 25 09:44 UTC │
	│ stop    │ -p scheduled-stop-692905 --schedule 15s                                                                                     │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --schedule 15s                                                                                     │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --schedule 15s                                                                                     │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │ 25 Oct 25 09:45 UTC │
	│ delete  │ -p scheduled-stop-692905                                                                                                    │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │ 25 Oct 25 09:45 UTC │
	│ start   │ -p insufficient-storage-956587 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-956587 │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │                     │
	│ delete  │ -p insufficient-storage-956587                                                                                              │ insufficient-storage-956587 │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │ 25 Oct 25 09:45 UTC │
	│ start   │ -p pause-175355 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-175355                │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p cert-expiration-225615 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-225615      │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p offline-crio-173316 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-173316         │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p NoKubernetes-617681 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │                     │
	│ start   │ -p NoKubernetes-617681 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p NoKubernetes-617681 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ delete  │ -p offline-crio-173316                                                                                                      │ offline-crio-173316         │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ delete  │ -p NoKubernetes-617681                                                                                                      │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p pause-175355 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-175355                │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p NoKubernetes-617681 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p force-systemd-flag-170120 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-170120   │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ pause   │ -p pause-175355 --alsologtostderr -v=5                                                                                      │ pause-175355                │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p NoKubernetes-617681 sudo systemctl is-active --quiet service kubelet                                                     │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:46:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:46:33.966162  317940 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:46:33.966428  317940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:33.966438  317940 out.go:374] Setting ErrFile to fd 2...
	I1025 09:46:33.966442  317940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:33.966670  317940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:46:33.967143  317940 out.go:368] Setting JSON to false
	I1025 09:46:33.968378  317940 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5338,"bootTime":1761380256,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:46:33.968565  317940 start.go:141] virtualization: kvm guest
	I1025 09:46:33.970159  317940 out.go:179] * [force-systemd-flag-170120] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:46:33.971341  317940 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:46:33.971340  317940 notify.go:220] Checking for updates...
	I1025 09:46:33.973460  317940 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:46:33.974528  317940 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:46:33.975534  317940 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:46:33.976606  317940 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:46:33.979894  317940 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:46:33.981360  317940 config.go:182] Loaded profile config "cert-expiration-225615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:33.981503  317940 config.go:182] Loaded profile config "pause-175355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:33.981596  317940 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:46:34.006150  317940 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:46:34.006296  317940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:34.070437  317940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-25 09:46:34.060262263 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:46:34.070594  317940 docker.go:318] overlay module found
	I1025 09:46:34.073072  317940 out.go:179] * Using the docker driver based on user configuration
	I1025 09:46:34.074288  317940 start.go:305] selected driver: docker
	I1025 09:46:34.074319  317940 start.go:925] validating driver "docker" against <nil>
	I1025 09:46:34.074337  317940 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:46:34.074926  317940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:34.136051  317940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-25 09:46:34.125341353 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:46:34.136237  317940 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:46:34.136478  317940 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:46:34.138053  317940 out.go:179] * Using Docker driver with root privileges
	I1025 09:46:34.139025  317940 cni.go:84] Creating CNI manager for ""
	I1025 09:46:34.139087  317940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:46:34.139098  317940 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:46:34.139161  317940 start.go:349] cluster config:
	{Name:force-systemd-flag-170120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-170120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:34.140227  317940 out.go:179] * Starting "force-systemd-flag-170120" primary control-plane node in "force-systemd-flag-170120" cluster
	I1025 09:46:34.141125  317940 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:46:34.142208  317940 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:46:34.143319  317940 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:34.143362  317940 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:46:34.143375  317940 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:46:34.143391  317940 cache.go:58] Caching tarball of preloaded images
	I1025 09:46:34.143472  317940 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:46:34.143483  317940 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:46:34.143565  317940 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/force-systemd-flag-170120/config.json ...
	I1025 09:46:34.143582  317940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/force-systemd-flag-170120/config.json: {Name:mkfec303a44bf9939d06b48780c3a32e4337567e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:34.164614  317940 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:46:34.164646  317940 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:46:34.164666  317940 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:46:34.164702  317940 start.go:360] acquireMachinesLock for force-systemd-flag-170120: {Name:mk2131c503a45e338d8dbe5954a03f97b858783e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:46:34.164813  317940 start.go:364] duration metric: took 90.701µs to acquireMachinesLock for "force-systemd-flag-170120"
	I1025 09:46:34.164843  317940 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-170120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-170120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:46:34.164929  317940 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:46:33.941721  317730 preload.go:183] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1025 09:46:33.941760  317730 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:46:33.965107  317730 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:46:33.965128  317730 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	W1025 09:46:34.269225  317730 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1025 09:46:34.287653  317730 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1025 09:46:34.287819  317730 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/NoKubernetes-617681/config.json ...
	I1025 09:46:34.287872  317730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/NoKubernetes-617681/config.json: {Name:mke4c1ba5b16c11429d9b7a3c3c5ca075bb38142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:34.288037  317730 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:46:34.288077  317730 start.go:360] acquireMachinesLock for NoKubernetes-617681: {Name:mk55e5c71f2b935be2255dc6056c6bd549f8a5b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:46:34.288129  317730 start.go:364] duration metric: took 31.088µs to acquireMachinesLock for "NoKubernetes-617681"
	I1025 09:46:34.288146  317730 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-617681 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-617681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:46:34.288225  317730 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:46:32.159100  316943 out.go:252] * Updating the running docker "pause-175355" container ...
	I1025 09:46:32.159132  316943 machine.go:93] provisionDockerMachine start ...
	I1025 09:46:32.159218  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:32.178619  316943 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:32.178843  316943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 09:46:32.178854  316943 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:46:32.321360  316943 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-175355
	
	I1025 09:46:32.321396  316943 ubuntu.go:182] provisioning hostname "pause-175355"
	I1025 09:46:32.321460  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:32.340237  316943 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:32.340475  316943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 09:46:32.340490  316943 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-175355 && echo "pause-175355" | sudo tee /etc/hostname
	I1025 09:46:32.529633  316943 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-175355
	
	I1025 09:46:32.529716  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:32.548442  316943 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:32.548659  316943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 09:46:32.548675  316943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-175355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-175355/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-175355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:46:32.687882  316943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:46:32.687917  316943 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:46:32.687953  316943 ubuntu.go:190] setting up certificates
	I1025 09:46:32.687962  316943 provision.go:84] configureAuth start
	I1025 09:46:32.688013  316943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-175355
	I1025 09:46:32.705706  316943 provision.go:143] copyHostCerts
	I1025 09:46:32.705767  316943 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:46:32.705779  316943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:46:32.705853  316943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:46:32.705945  316943 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:46:32.705954  316943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:46:32.705987  316943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:46:32.706039  316943 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:46:32.706048  316943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:46:32.706077  316943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:46:32.706127  316943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.pause-175355 san=[127.0.0.1 192.168.85.2 localhost minikube pause-175355]
	I1025 09:46:33.144622  316943 provision.go:177] copyRemoteCerts
	I1025 09:46:33.144675  316943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:46:33.144717  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:33.163770  316943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/pause-175355/id_rsa Username:docker}
	I1025 09:46:33.267048  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:46:33.305869  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:46:33.327104  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 09:46:33.362413  316943 provision.go:87] duration metric: took 674.429895ms to configureAuth
	I1025 09:46:33.362446  316943 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:46:33.362719  316943 config.go:182] Loaded profile config "pause-175355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:33.362841  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:33.383946  316943 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:33.384228  316943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 09:46:33.384256  316943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:46:33.711290  316943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:46:33.711317  316943 machine.go:96] duration metric: took 1.552176673s to provisionDockerMachine
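The CRIO_MINIKUBE_OPTIONS drop-in written a few lines above lands in /etc/sysconfig/crio.minikube, and crio is restarted to pick it up; on the kicbase image the crio unit presumably sources that file as an environment file (an assumption; the log only shows the write and the restart). A quick check from the host, following the ssh invocation style used elsewhere in this report:

	# show the file and the unit definitions that should reference it (unit layout is an assumption)
	out/minikube-linux-amd64 ssh -p pause-175355 sudo cat /etc/sysconfig/crio.minikube
	out/minikube-linux-amd64 ssh -p pause-175355 sudo systemctl cat crio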
	I1025 09:46:33.711330  316943 start.go:293] postStartSetup for "pause-175355" (driver="docker")
	I1025 09:46:33.711362  316943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:46:33.711422  316943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:46:33.711479  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:33.731889  316943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/pause-175355/id_rsa Username:docker}
	I1025 09:46:33.837428  316943 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:46:33.842409  316943 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:46:33.842448  316943 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:46:33.842472  316943 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:46:33.842541  316943 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:46:33.842682  316943 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:46:33.842801  316943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:46:33.852191  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:46:33.871831  316943 start.go:296] duration metric: took 160.486562ms for postStartSetup
	I1025 09:46:33.871919  316943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:46:33.871967  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:33.893328  316943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/pause-175355/id_rsa Username:docker}
	I1025 09:46:34.001282  316943 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:46:34.006718  316943 fix.go:56] duration metric: took 1.867432854s for fixHost
	I1025 09:46:34.006748  316943 start.go:83] releasing machines lock for "pause-175355", held for 1.86748032s
	I1025 09:46:34.006807  316943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-175355
	I1025 09:46:34.025007  316943 ssh_runner.go:195] Run: cat /version.json
	I1025 09:46:34.025077  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:34.025155  316943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:46:34.025223  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:34.047229  316943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/pause-175355/id_rsa Username:docker}
	I1025 09:46:34.049402  316943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/pause-175355/id_rsa Username:docker}
	I1025 09:46:34.213618  316943 ssh_runner.go:195] Run: systemctl --version
	I1025 09:46:34.220434  316943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:46:34.262910  316943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:46:34.267719  316943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:46:34.267799  316943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:46:34.276139  316943 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:46:34.276171  316943 start.go:495] detecting cgroup driver to use...
	I1025 09:46:34.276203  316943 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:46:34.276252  316943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:46:34.293274  316943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:46:34.307568  316943 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:46:34.307633  316943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:46:34.324969  316943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:46:34.340292  316943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:46:34.484388  316943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:46:34.601154  316943 docker.go:234] disabling docker service ...
	I1025 09:46:34.601222  316943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:46:34.618747  316943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:46:34.639288  316943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:46:34.783913  316943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:46:34.941385  316943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:46:34.960847  316943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:46:34.985130  316943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:46:34.985532  316943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:35.005222  316943 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:46:35.005421  316943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:35.019304  316943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:35.030901  316943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:35.041992  316943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:46:35.051394  316943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:35.070375  316943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:35.080430  316943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
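Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on a known state: the pause image is pinned, the cgroup manager is forced to systemd, conmon is moved into the pod cgroup, and unprivileged binds to low ports are allowed. A sketch of the keys the file should end up containing (their enclosing TOML sections are not visible in the log, so they are omitted here):

	out/minikube-linux-amd64 ssh -p pause-175355 sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# expected to contain at least:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]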
	I1025 09:46:35.090702  316943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:46:35.099012  316943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:46:35.109159  316943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:35.253971  316943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:46:35.683756  316943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:46:35.683829  316943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:46:35.688157  316943 start.go:563] Will wait 60s for crictl version
	I1025 09:46:35.688222  316943 ssh_runner.go:195] Run: which crictl
	I1025 09:46:35.692767  316943 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:46:35.727936  316943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
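	This version probe succeeds without any endpoint flag because of the /etc/crictl.yaml written above; absent that file, crictl probes a list of default endpoints and warns. The equivalent explicit invocation, useful when the config file is in doubt:

	# inside the node; should report the same cri-o 1.34.1 as the output above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version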
	I1025 09:46:35.728022  316943 ssh_runner.go:195] Run: crio --version
	I1025 09:46:35.761195  316943 ssh_runner.go:195] Run: crio --version
	I1025 09:46:35.801171  316943 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:46:35.802402  316943 cli_runner.go:164] Run: docker network inspect pause-175355 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:46:35.824758  316943 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:46:35.829654  316943 kubeadm.go:883] updating cluster {Name:pause-175355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-175355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:46:35.829843  316943 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:35.829897  316943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:46:35.866606  316943 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:46:35.866636  316943 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:46:35.866700  316943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:46:35.908939  316943 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:46:35.908967  316943 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:46:35.908977  316943 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:46:35.909111  316943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-175355 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-175355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
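	The [Service] stanza above is a systemd drop-in (the 362-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below); the empty ExecStart= line is the standard systemd idiom for clearing the base unit's ExecStart before redefining it with the minikube-specific flags. To see the merged result the way systemd will actually run it:

	# inside the node: base unit plus all drop-ins
	sudo systemctl cat kubelet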
	I1025 09:46:35.909248  316943 ssh_runner.go:195] Run: crio config
	I1025 09:46:35.973712  316943 cni.go:84] Creating CNI manager for ""
	I1025 09:46:35.973737  316943 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:46:35.973760  316943 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:46:35.973788  316943 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-175355 NodeName:pause-175355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:46:35.973952  316943 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-175355"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
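	The generated config above is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2208-byte scp below) before kubeadm consumes it. Recent kubeadm releases can sanity-check such a file in place; a sketch, assuming `kubeadm config validate` is available in the v1.34.1 binary shipped under /var/lib/minikube/binaries:

	# inside the node; exits non-zero on schema or value errors
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new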
	
	I1025 09:46:35.974033  316943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:46:35.982744  316943 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:46:35.982816  316943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:46:35.990887  316943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1025 09:46:36.005828  316943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:46:36.020758  316943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1025 09:46:36.034986  316943 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:46:36.039081  316943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:36.180761  316943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:46:36.194476  316943 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355 for IP: 192.168.85.2
	I1025 09:46:36.194502  316943 certs.go:195] generating shared ca certs ...
	I1025 09:46:36.194525  316943 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:36.194759  316943 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:46:36.194837  316943 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:46:36.194859  316943 certs.go:257] generating profile certs ...
	I1025 09:46:36.194976  316943 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/client.key
	I1025 09:46:36.195050  316943 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/apiserver.key.8c617dd2
	I1025 09:46:36.195130  316943 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/proxy-client.key
	I1025 09:46:36.195301  316943 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:46:36.195360  316943 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:46:36.195376  316943 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:46:36.195418  316943 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:46:36.195464  316943 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:46:36.195497  316943 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:46:36.195565  316943 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:46:36.196520  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:46:36.217261  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:46:36.235533  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:46:36.253574  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:46:36.271090  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:46:36.289145  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:46:36.308709  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:46:36.328016  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:46:36.346612  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:46:36.369662  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:46:36.389784  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:46:36.472333  316943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:46:36.486396  316943 ssh_runner.go:195] Run: openssl version
	I1025 09:46:36.493604  316943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:46:36.513649  316943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:36.517992  316943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:36.518047  316943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:36.556084  316943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:46:36.566489  316943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:46:36.578972  316943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:46:36.585247  316943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:46:36.585321  316943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:46:36.635079  316943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:46:36.645174  316943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:46:36.655676  316943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:46:36.660390  316943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:46:36.660459  316943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:46:36.706250  316943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
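
The three test/ln/hash/ln sequences above install each CA into the system trust store: OpenSSL resolves certificates under /etc/ssl/certs by a file named <subject-hash>.0, so every PEM copied into /usr/share/ca-certificates needs a matching hash symlink. A minimal Go sketch of that step, assuming local file access rather than minikube's ssh_runner (the helper name linkCACert is hypothetical):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a PEM certificate and
	// symlinks it into /etc/ssl/certs as <hash>.0, mirroring the
	// "openssl x509 -hash -noout" + "ln -fs" pair in the log above.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // ln -fs semantics: replace a stale link if present
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
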
	I1025 09:46:36.716687  316943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:46:36.721491  316943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:46:36.762118  316943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:46:36.807654  316943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:46:36.848379  316943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:46:36.904917  316943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:46:36.942499  316943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
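
Each "-checkend 86400" run above asks OpenSSL whether the certificate expires within the next 24 hours; a positive answer is what would trigger regeneration on restart. A rough local Go equivalent using crypto/x509 (path and the 24h window taken from the log; a sketch, not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d,
	// the same question "openssl x509 -checkend 86400" answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
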
	I1025 09:46:36.980714  316943 kubeadm.go:400] StartCluster: {Name:pause-175355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-175355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:36.980841  316943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:46:36.980910  316943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:46:37.013827  316943 cri.go:89] found id: "9e2c562768d3ef40e165f634b295addfa5cb5274c101de18e0ab8490c95e8116"
	I1025 09:46:37.013850  316943 cri.go:89] found id: "a03995ac4fccce122a9c1002f2459b9981e66d7e2725203dabc6c68c7494cc34"
	I1025 09:46:37.013856  316943 cri.go:89] found id: "5a8d746e97640c6b363685a4b3c2c1a0914d9ab9fe85a43546d4e990f24b8958"
	I1025 09:46:37.013861  316943 cri.go:89] found id: "ad3565a09ce225576f9b373c237fdbcf567fad3f696a31c2511744550b317595"
	I1025 09:46:37.013864  316943 cri.go:89] found id: "7d005017d0bf28c57b242a453d705732d305644fa622751689bb381580ae0cc9"
	I1025 09:46:37.013869  316943 cri.go:89] found id: "6c4196345d8b07529fb0ecb3b623137f5db068aabe551ff91243a064f9e8040e"
	I1025 09:46:37.013873  316943 cri.go:89] found id: "e5339095bdcaff6188dc8b28e527f13dbdfc4e3e0f4daa38be912228b489c18d"
	I1025 09:46:37.013877  316943 cri.go:89] found id: ""
	I1025 09:46:37.013928  316943 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:46:37.027176  316943 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:46:37Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:46:37.027243  316943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:46:37.036265  316943 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:46:37.036289  316943 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:46:37.036338  316943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:46:37.044405  316943 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:46:37.045255  316943 kubeconfig.go:125] found "pause-175355" server: "https://192.168.85.2:8443"
	I1025 09:46:37.046309  316943 kapi.go:59] client config for pause-175355: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/client.key", CAFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:46:37.046926  316943 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 09:46:37.046949  316943 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 09:46:37.046958  316943 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 09:46:37.046970  316943 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 09:46:37.046977  316943 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 09:46:37.047472  316943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:46:37.055761  316943 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 09:46:37.055798  316943 kubeadm.go:601] duration metric: took 19.50313ms to restartPrimaryControlPlane
	I1025 09:46:37.055807  316943 kubeadm.go:402] duration metric: took 75.112206ms to StartCluster
	I1025 09:46:37.055823  316943 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:37.055903  316943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:46:37.056983  316943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:37.124958  316943 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:46:37.125103  316943 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:46:37.125210  316943 config.go:182] Loaded profile config "pause-175355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:37.129924  316943 out.go:179] * Verifying Kubernetes components...
	I1025 09:46:37.129936  316943 out.go:179] * Enabled addons: 
	I1025 09:46:34.290118  317730 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:46:34.290398  317730 start.go:159] libmachine.API.Create for "NoKubernetes-617681" (driver="docker")
	I1025 09:46:34.290459  317730 client.go:168] LocalClient.Create starting
	I1025 09:46:34.290522  317730 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem
	I1025 09:46:34.290560  317730 main.go:141] libmachine: Decoding PEM data...
	I1025 09:46:34.290581  317730 main.go:141] libmachine: Parsing certificate...
	I1025 09:46:34.290695  317730 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem
	I1025 09:46:34.290738  317730 main.go:141] libmachine: Decoding PEM data...
	I1025 09:46:34.290754  317730 main.go:141] libmachine: Parsing certificate...
	I1025 09:46:34.291239  317730 cli_runner.go:164] Run: docker network inspect NoKubernetes-617681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:46:34.312014  317730 cli_runner.go:211] docker network inspect NoKubernetes-617681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:46:34.312096  317730 network_create.go:284] running [docker network inspect NoKubernetes-617681] to gather additional debugging logs...
	I1025 09:46:34.312136  317730 cli_runner.go:164] Run: docker network inspect NoKubernetes-617681
	W1025 09:46:34.330954  317730 cli_runner.go:211] docker network inspect NoKubernetes-617681 returned with exit code 1
	I1025 09:46:34.330983  317730 network_create.go:287] error running [docker network inspect NoKubernetes-617681]: docker network inspect NoKubernetes-617681: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-617681 not found
	I1025 09:46:34.330995  317730 network_create.go:289] output of [docker network inspect NoKubernetes-617681]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-617681 not found
	
	** /stderr **
	I1025 09:46:34.331081  317730 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:46:34.351886  317730 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b89a58b7fce0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:e2:93:21:98:bc} reservation:<nil>}
	I1025 09:46:34.352685  317730 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4482374e86a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:20:65:c1:4a:19} reservation:<nil>}
	I1025 09:46:34.353316  317730 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7323bc384751 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:33:7f:07:f5:30} reservation:<nil>}
	I1025 09:46:34.353982  317730 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b7a1ea657c41 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0e:10:ed:26:f0:49} reservation:<nil>}
	I1025 09:46:34.354688  317730 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-271348bbdff7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4a:9e:f5:f5:c8:7a} reservation:<nil>}
	I1025 09:46:34.355492  317730 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-b7e4f9cc4b1b IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:da:32:85:b6:c0:99} reservation:<nil>}
	I1025 09:46:34.356188  317730 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea0f30}
	I1025 09:46:34.356217  317730 network_create.go:124] attempt to create docker network NoKubernetes-617681 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1025 09:46:34.356260  317730 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-617681 NoKubernetes-617681
	I1025 09:46:34.429499  317730 network_create.go:108] docker network NoKubernetes-617681 192.168.103.0/24 created
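
The subnet walk above is deterministic: candidate blocks step through 192.168.x.0/24 with the third octet increasing by 9 (49, 58, 67, ...), and the first block not already claimed by an existing bridge wins, which is how this run lands on 192.168.103.0/24. A toy reproduction of the selection (the step size and taken set are inferred from this log, not from minikube's source):

	package main

	import "fmt"

	// freeSubnet walks candidate /24 blocks the way the log does: start at
	// 192.168.49.0/24 and advance the third octet by 9 until a block is not
	// already occupied by an existing bridge network.
	func freeSubnet(taken map[string]bool) string {
		for third := 49; third <= 255; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{ // the six bridges this run skipped
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		fmt.Println(freeSubnet(taken)) // prints 192.168.103.0/24
	}
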
	I1025 09:46:34.429529  317730 kic.go:121] calculated static IP "192.168.103.2" for the "NoKubernetes-617681" container
	I1025 09:46:34.429595  317730 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:46:34.448285  317730 cli_runner.go:164] Run: docker volume create NoKubernetes-617681 --label name.minikube.sigs.k8s.io=NoKubernetes-617681 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:46:34.466046  317730 oci.go:103] Successfully created a docker volume NoKubernetes-617681
	I1025 09:46:34.466145  317730 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-617681-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-617681 --entrypoint /usr/bin/test -v NoKubernetes-617681:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:46:34.941735  317730 oci.go:107] Successfully prepared a docker volume NoKubernetes-617681
	I1025 09:46:34.941778  317730 preload.go:183] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1025 09:46:34.941889  317730 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:46:34.941922  317730 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:46:34.942110  317730 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:46:35.020795  317730 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-617681 --name NoKubernetes-617681 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-617681 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-617681 --network NoKubernetes-617681 --ip 192.168.103.2 --volume NoKubernetes-617681:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:46:35.536657  317730 cli_runner.go:164] Run: docker container inspect NoKubernetes-617681 --format={{.State.Running}}
	I1025 09:46:35.558892  317730 cli_runner.go:164] Run: docker container inspect NoKubernetes-617681 --format={{.State.Status}}
	I1025 09:46:35.581909  317730 cli_runner.go:164] Run: docker exec NoKubernetes-617681 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:46:35.638026  317730 oci.go:144] the created container "NoKubernetes-617681" has a running status.
	I1025 09:46:35.638074  317730 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa...
	I1025 09:46:36.545559  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 09:46:36.545659  317730 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:46:36.578585  317730 cli_runner.go:164] Run: docker container inspect NoKubernetes-617681 --format={{.State.Status}}
	I1025 09:46:36.603599  317730 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:46:36.603630  317730 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-617681 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:46:36.657939  317730 cli_runner.go:164] Run: docker container inspect NoKubernetes-617681 --format={{.State.Status}}
	I1025 09:46:36.680157  317730 machine.go:93] provisionDockerMachine start ...
	I1025 09:46:36.680263  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:36.702944  317730 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:36.703324  317730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 09:46:36.703361  317730 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:46:36.853535  317730 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-617681
	
	I1025 09:46:36.853566  317730 ubuntu.go:182] provisioning hostname "NoKubernetes-617681"
	I1025 09:46:36.853633  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:36.874492  317730 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:36.874836  317730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 09:46:36.874858  317730 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-617681 && echo "NoKubernetes-617681" | sudo tee /etc/hostname
	I1025 09:46:37.126643  317730 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-617681
	
	I1025 09:46:37.126719  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:37.147411  317730 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:37.147692  317730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 09:46:37.147719  317730 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-617681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-617681/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-617681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:46:37.298384  317730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:46:37.298414  317730 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:46:37.298441  317730 ubuntu.go:190] setting up certificates
	I1025 09:46:37.298454  317730 provision.go:84] configureAuth start
	I1025 09:46:37.298507  317730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-617681
	I1025 09:46:37.319768  317730 provision.go:143] copyHostCerts
	I1025 09:46:37.319817  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:46:37.319858  317730 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:46:37.319879  317730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:46:37.319968  317730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:46:37.320067  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:46:37.320097  317730 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:46:37.320108  317730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:46:37.320150  317730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:46:37.320217  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:46:37.320244  317730 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:46:37.320254  317730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:46:37.320292  317730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:46:37.320396  317730 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-617681 san=[127.0.0.1 192.168.103.2 NoKubernetes-617681 localhost minikube]
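
The server cert generated above is signed by the machine CA and carries the SAN list shown in the log (127.0.0.1, the container IP, the machine name, localhost, minikube). A self-contained crypto/x509 sketch of issuing such a cert; the throwaway in-memory CA stands in for certs/ca.pem, and the key size and validity window are assumptions:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway in-memory CA standing in for certs/ca.pem + ca-key.pem.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		ca, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SAN set from the log line above.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-617681"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"NoKubernetes-617681", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		fmt.Println("server cert DER bytes:", len(der), "err:", err)
	}
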
	I1025 09:46:37.529684  317730 provision.go:177] copyRemoteCerts
	I1025 09:46:37.529746  317730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:46:37.529787  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:37.551104  317730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa Username:docker}
	I1025 09:46:37.654222  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 09:46:37.654281  317730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:46:37.676402  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 09:46:37.676480  317730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 09:46:37.694310  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 09:46:37.694400  317730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:46:37.712074  317730 provision.go:87] duration metric: took 413.605345ms to configureAuth
	I1025 09:46:37.712104  317730 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:46:37.712282  317730 config.go:182] Loaded profile config "NoKubernetes-617681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1025 09:46:37.712420  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:37.731037  317730 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:37.731244  317730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 09:46:37.731259  317730 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:46:38.240908  317730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:46:38.240941  317730 machine.go:96] duration metric: took 1.560757558s to provisionDockerMachine
	I1025 09:46:38.240953  317730 client.go:171] duration metric: took 3.950482093s to LocalClient.Create
	I1025 09:46:38.240974  317730 start.go:167] duration metric: took 3.950586803s to libmachine.API.Create "NoKubernetes-617681"
	I1025 09:46:38.240981  317730 start.go:293] postStartSetup for "NoKubernetes-617681" (driver="docker")
	I1025 09:46:38.240990  317730 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:46:38.241053  317730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:46:38.241101  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:38.258476  317730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa Username:docker}
	I1025 09:46:38.592659  317730 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:46:38.596864  317730 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:46:38.596895  317730 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:46:38.596909  317730 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:46:38.596979  317730 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:46:38.597055  317730 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:46:38.597065  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> /etc/ssl/certs/1341452.pem
	I1025 09:46:38.597144  317730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:46:38.605225  317730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:46:38.626381  317730 start.go:296] duration metric: took 385.387015ms for postStartSetup
	I1025 09:46:38.626741  317730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-617681
	I1025 09:46:38.645791  317730 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/NoKubernetes-617681/config.json ...
	I1025 09:46:38.646143  317730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:46:38.646204  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:38.666462  317730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa Username:docker}
	I1025 09:46:34.169464  317940 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:46:34.169712  317940 start.go:159] libmachine.API.Create for "force-systemd-flag-170120" (driver="docker")
	I1025 09:46:34.169747  317940 client.go:168] LocalClient.Create starting
	I1025 09:46:34.169837  317940 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem
	I1025 09:46:34.169885  317940 main.go:141] libmachine: Decoding PEM data...
	I1025 09:46:34.169913  317940 main.go:141] libmachine: Parsing certificate...
	I1025 09:46:34.170001  317940 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem
	I1025 09:46:34.170034  317940 main.go:141] libmachine: Decoding PEM data...
	I1025 09:46:34.170048  317940 main.go:141] libmachine: Parsing certificate...
	I1025 09:46:34.170446  317940 cli_runner.go:164] Run: docker network inspect force-systemd-flag-170120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:46:34.188296  317940 cli_runner.go:211] docker network inspect force-systemd-flag-170120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:46:34.188403  317940 network_create.go:284] running [docker network inspect force-systemd-flag-170120] to gather additional debugging logs...
	I1025 09:46:34.188430  317940 cli_runner.go:164] Run: docker network inspect force-systemd-flag-170120
	W1025 09:46:34.204892  317940 cli_runner.go:211] docker network inspect force-systemd-flag-170120 returned with exit code 1
	I1025 09:46:34.204922  317940 network_create.go:287] error running [docker network inspect force-systemd-flag-170120]: docker network inspect force-systemd-flag-170120: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-170120 not found
	I1025 09:46:34.204938  317940 network_create.go:289] output of [docker network inspect force-systemd-flag-170120]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-170120 not found
	
	** /stderr **
	I1025 09:46:34.205063  317940 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:46:34.224591  317940 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b89a58b7fce0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:e2:93:21:98:bc} reservation:<nil>}
	I1025 09:46:34.225267  317940 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4482374e86a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:20:65:c1:4a:19} reservation:<nil>}
	I1025 09:46:34.225953  317940 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7323bc384751 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:33:7f:07:f5:30} reservation:<nil>}
	I1025 09:46:34.226700  317940 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b7a1ea657c41 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0e:10:ed:26:f0:49} reservation:<nil>}
	I1025 09:46:34.227504  317940 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-271348bbdff7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4a:9e:f5:f5:c8:7a} reservation:<nil>}
	I1025 09:46:34.228432  317940 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e66ac0}
	I1025 09:46:34.228457  317940 network_create.go:124] attempt to create docker network force-systemd-flag-170120 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 09:46:34.228502  317940 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-170120 force-systemd-flag-170120
	I1025 09:46:34.293561  317940 network_create.go:108] docker network force-systemd-flag-170120 192.168.94.0/24 created
	I1025 09:46:34.293612  317940 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-170120" container
	I1025 09:46:34.293704  317940 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:46:34.314078  317940 cli_runner.go:164] Run: docker volume create force-systemd-flag-170120 --label name.minikube.sigs.k8s.io=force-systemd-flag-170120 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:46:34.333548  317940 oci.go:103] Successfully created a docker volume force-systemd-flag-170120
	I1025 09:46:34.333701  317940 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-170120-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-170120 --entrypoint /usr/bin/test -v force-systemd-flag-170120:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:46:34.798380  317940 oci.go:107] Successfully prepared a docker volume force-systemd-flag-170120
	I1025 09:46:34.798431  317940 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:34.798459  317940 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:46:34.798543  317940 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-170120:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:46:38.625886  317940 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-170120:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.827298561s)
	I1025 09:46:38.625934  317940 kic.go:203] duration metric: took 3.827474108s to extract preloaded images to volume ...
	W1025 09:46:38.626042  317940 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:46:38.626090  317940 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:46:38.626146  317940 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:46:38.686830  317940 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-170120 --name force-systemd-flag-170120 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-170120 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-170120 --network force-systemd-flag-170120 --ip 192.168.94.2 --volume force-systemd-flag-170120:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:46:37.132157  316943 addons.go:514] duration metric: took 7.065139ms for enable addons: enabled=[]
	I1025 09:46:37.132192  316943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:37.260389  316943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:46:37.273327  316943 node_ready.go:35] waiting up to 6m0s for node "pause-175355" to be "Ready" ...
	I1025 09:46:37.281111  316943 node_ready.go:49] node "pause-175355" is "Ready"
	I1025 09:46:37.281138  316943 node_ready.go:38] duration metric: took 7.758092ms for node "pause-175355" to be "Ready" ...
	I1025 09:46:37.281154  316943 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:46:37.281208  316943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:46:37.293084  316943 api_server.go:72] duration metric: took 168.068963ms to wait for apiserver process to appear ...
	I1025 09:46:37.293111  316943 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:46:37.293138  316943 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:46:37.298267  316943 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 09:46:37.299435  316943 api_server.go:141] control plane version: v1.34.1
	I1025 09:46:37.299464  316943 api_server.go:131] duration metric: took 6.344829ms to wait for apiserver health ...
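
The healthz wait above is a plain HTTPS GET against the apiserver using the profile's client certificate and the cluster CA, the same material listed in the rest.Config dump earlier. A minimal sketch, assuming the paths from this log:

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		base := "/home/jenkins/minikube-integration/21794-130604/.minikube"
		caPEM, err := os.ReadFile(base + "/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		cert, err := tls.LoadX509KeyPair(
			base+"/profiles/pause-175355/client.crt",
			base+"/profiles/pause-175355/client.key",
		)
		if err != nil {
			panic(err)
		}
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
		}}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}
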
	I1025 09:46:37.299475  316943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:46:37.302903  316943 system_pods.go:59] 7 kube-system pods found
	I1025 09:46:37.302939  316943 system_pods.go:61] "coredns-66bc5c9577-s4rnp" [22bdeb9a-e672-4e75-8488-afecc2d96283] Running
	I1025 09:46:37.302945  316943 system_pods.go:61] "etcd-pause-175355" [14fe9006-b376-4914-98ab-fd22e19d6f99] Running
	I1025 09:46:37.302949  316943 system_pods.go:61] "kindnet-zb6d9" [c5542753-32da-4749-bfdd-948d337adf13] Running
	I1025 09:46:37.302953  316943 system_pods.go:61] "kube-apiserver-pause-175355" [b3d3ef22-7ef5-44da-8a1c-a189970e788f] Running
	I1025 09:46:37.302956  316943 system_pods.go:61] "kube-controller-manager-pause-175355" [47a8931d-33a9-48e3-b96e-8967bddc533d] Running
	I1025 09:46:37.302958  316943 system_pods.go:61] "kube-proxy-cvr5p" [a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480] Running
	I1025 09:46:37.302961  316943 system_pods.go:61] "kube-scheduler-pause-175355" [37300eb1-351d-48cc-973b-b7554cf78b07] Running
	I1025 09:46:37.302966  316943 system_pods.go:74] duration metric: took 3.485271ms to wait for pod list to return data ...
	I1025 09:46:37.302976  316943 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:46:37.304838  316943 default_sa.go:45] found service account: "default"
	I1025 09:46:37.304861  316943 default_sa.go:55] duration metric: took 1.87752ms for default service account to be created ...
	I1025 09:46:37.304872  316943 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:46:37.307661  316943 system_pods.go:86] 7 kube-system pods found
	I1025 09:46:37.307685  316943 system_pods.go:89] "coredns-66bc5c9577-s4rnp" [22bdeb9a-e672-4e75-8488-afecc2d96283] Running
	I1025 09:46:37.307690  316943 system_pods.go:89] "etcd-pause-175355" [14fe9006-b376-4914-98ab-fd22e19d6f99] Running
	I1025 09:46:37.307694  316943 system_pods.go:89] "kindnet-zb6d9" [c5542753-32da-4749-bfdd-948d337adf13] Running
	I1025 09:46:37.307697  316943 system_pods.go:89] "kube-apiserver-pause-175355" [b3d3ef22-7ef5-44da-8a1c-a189970e788f] Running
	I1025 09:46:37.307700  316943 system_pods.go:89] "kube-controller-manager-pause-175355" [47a8931d-33a9-48e3-b96e-8967bddc533d] Running
	I1025 09:46:37.307713  316943 system_pods.go:89] "kube-proxy-cvr5p" [a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480] Running
	I1025 09:46:37.307720  316943 system_pods.go:89] "kube-scheduler-pause-175355" [37300eb1-351d-48cc-973b-b7554cf78b07] Running
	I1025 09:46:37.307729  316943 system_pods.go:126] duration metric: took 2.849429ms to wait for k8s-apps to be running ...
	I1025 09:46:37.307742  316943 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:46:37.307794  316943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:46:37.322874  316943 system_svc.go:56] duration metric: took 15.115051ms WaitForService to wait for kubelet
	I1025 09:46:37.322905  316943 kubeadm.go:586] duration metric: took 197.895327ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:46:37.322927  316943 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:46:37.325681  316943 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:46:37.325704  316943 node_conditions.go:123] node cpu capacity is 8
	I1025 09:46:37.325715  316943 node_conditions.go:105] duration metric: took 2.782916ms to run NodePressure ...
	I1025 09:46:37.325730  316943 start.go:241] waiting for startup goroutines ...
	I1025 09:46:37.325739  316943 start.go:246] waiting for cluster config update ...
	I1025 09:46:37.325750  316943 start.go:255] writing updated cluster config ...
	I1025 09:46:37.326046  316943 ssh_runner.go:195] Run: rm -f paused
	I1025 09:46:37.329972  316943 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:46:37.330745  316943 kapi.go:59] client config for pause-175355: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/client.key", CAFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:46:37.333496  316943 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s4rnp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.337507  316943 pod_ready.go:94] pod "coredns-66bc5c9577-s4rnp" is "Ready"
	I1025 09:46:37.337529  316943 pod_ready.go:86] duration metric: took 4.010415ms for pod "coredns-66bc5c9577-s4rnp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.339470  316943 pod_ready.go:83] waiting for pod "etcd-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.343242  316943 pod_ready.go:94] pod "etcd-pause-175355" is "Ready"
	I1025 09:46:37.343268  316943 pod_ready.go:86] duration metric: took 3.776089ms for pod "etcd-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.345158  316943 pod_ready.go:83] waiting for pod "kube-apiserver-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.348985  316943 pod_ready.go:94] pod "kube-apiserver-pause-175355" is "Ready"
	I1025 09:46:37.349007  316943 pod_ready.go:86] duration metric: took 3.827259ms for pod "kube-apiserver-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.351068  316943 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.734479  316943 pod_ready.go:94] pod "kube-controller-manager-pause-175355" is "Ready"
	I1025 09:46:37.734503  316943 pod_ready.go:86] duration metric: took 383.41353ms for pod "kube-controller-manager-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.934657  316943 pod_ready.go:83] waiting for pod "kube-proxy-cvr5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:38.334715  316943 pod_ready.go:94] pod "kube-proxy-cvr5p" is "Ready"
	I1025 09:46:38.334743  316943 pod_ready.go:86] duration metric: took 400.060172ms for pod "kube-proxy-cvr5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:38.534755  316943 pod_ready.go:83] waiting for pod "kube-scheduler-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:38.934748  316943 pod_ready.go:94] pod "kube-scheduler-pause-175355" is "Ready"
	I1025 09:46:38.934778  316943 pod_ready.go:86] duration metric: took 399.995258ms for pod "kube-scheduler-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:38.934793  316943 pod_ready.go:40] duration metric: took 1.604772098s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:46:38.984662  316943 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:46:38.986472  316943 out.go:179] * Done! kubectl is now configured to use "pause-175355" cluster and "default" namespace by default
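
The pod_ready loop that just finished polls kube-system pods matching each of the listed labels until their PodReady condition is True. A client-go sketch of the same wait (assumes k8s.io/client-go in go.mod; the 4m deadline, kubeconfig path, and label list come from this log, the loop structure is an illustration):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether a pod's PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21794-130604/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		labels := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		deadline := time.Now().Add(4 * time.Minute)
		for _, sel := range labels {
			for {
				pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					panic(err)
				}
				ready := len(pods.Items) > 0
				for i := range pods.Items {
					ready = ready && podReady(&pods.Items[i])
				}
				if ready || time.Now().After(deadline) {
					fmt.Println(sel, "ready:", ready)
					break
				}
				time.Sleep(2 * time.Second)
			}
		}
	}
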
	I1025 09:46:38.767646  317730 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:46:38.772183  317730 start.go:128] duration metric: took 4.483941993s to createHost
	I1025 09:46:38.772212  317730 start.go:83] releasing machines lock for "NoKubernetes-617681", held for 4.484072285s
	I1025 09:46:38.772280  317730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-617681
	I1025 09:46:38.790316  317730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:46:38.790389  317730 ssh_runner.go:195] Run: cat /version.json
	I1025 09:46:38.790434  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:38.790434  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:38.812040  317730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa Username:docker}
	I1025 09:46:38.813438  317730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa Username:docker}
	I1025 09:46:38.984217  317730 ssh_runner.go:195] Run: systemctl --version
	I1025 09:46:38.992234  317730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:46:39.038661  317730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:46:39.045030  317730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:46:39.045097  317730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:46:39.077984  317730 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:46:39.078035  317730 start.go:495] detecting cgroup driver to use...
	I1025 09:46:39.078071  317730 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:46:39.078143  317730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:46:39.096402  317730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:46:39.112482  317730 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:46:39.112531  317730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:46:39.138082  317730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:46:39.165737  317730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:46:39.286434  317730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:46:39.414508  317730 docker.go:234] disabling docker service ...
	I1025 09:46:39.414584  317730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:46:39.440541  317730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:46:39.458237  317730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:46:39.566627  317730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:46:39.668340  317730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:46:39.681578  317730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:46:39.696659  317730 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	I1025 09:46:40.225530  317730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 09:46:40.225590  317730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:40.236781  317730 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:46:40.236903  317730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:40.246075  317730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:40.254825  317730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:40.263707  317730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:46:40.271656  317730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:46:40.279166  317730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:46:40.286492  317730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:40.364447  317730 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:46:40.466948  317730 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:46:40.467007  317730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:46:40.471122  317730 start.go:563] Will wait 60s for crictl version
	I1025 09:46:40.471199  317730 ssh_runner.go:195] Run: which crictl
	I1025 09:46:40.474838  317730 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:46:40.499313  317730 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:46:40.499437  317730 ssh_runner.go:195] Run: crio --version
	I1025 09:46:40.528755  317730 ssh_runner.go:195] Run: crio --version
	I1025 09:46:40.559677  317730 out.go:179] * Preparing CRI-O 1.34.1 ...
	I1025 09:46:40.560846  317730 ssh_runner.go:195] Run: rm -f paused
	I1025 09:46:40.566019  317730 out.go:179] * Done! minikube is ready without Kubernetes!
	I1025 09:46:40.568576  317730 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	
	
	==> CRI-O <==
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.618933768Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.619886913Z" level=info msg="Conmon does support the --sync option"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.619907706Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.619920721Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.620656225Z" level=info msg="Conmon does support the --sync option"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.620669023Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.624685856Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.624724417Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.625680719Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.626884527Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.626955323Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.63356263Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679065046Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-s4rnp Namespace:kube-system ID:d3362595f8ac943d418bc941dc43e579c44004f86352196d05c1490003346646 UID:22bdeb9a-e672-4e75-8488-afecc2d96283 NetNS:/var/run/netns/5b4313e0-d83b-4050-b281-04eb70ba2214 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a358}] Aliases:map[]}"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679238888Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-s4rnp for CNI network kindnet (type=ptp)"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.67969442Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679724374Z" level=info msg="Starting seccomp notifier watcher"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679775387Z" level=info msg="Create NRI interface"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679910071Z" level=info msg="built-in NRI default validator is disabled"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679922595Z" level=info msg="runtime interface created"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679935649Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679943542Z" level=info msg="runtime interface starting up..."
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679950985Z" level=info msg="starting plugins..."
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679965945Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.680246724Z" level=info msg="No systemd watchdog enabled"
	Oct 25 09:46:35 pause-175355 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	9e2c562768d3e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   d3362595f8ac9       coredns-66bc5c9577-s4rnp               kube-system
	a03995ac4fccc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   548e91cfdb6a5       kindnet-zb6d9                          kube-system
	5a8d746e97640       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   f31338de3607b       kube-proxy-cvr5p                       kube-system
	ad3565a09ce22       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   bf58aa99a937a       kube-apiserver-pause-175355            kube-system
	7d005017d0bf2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   204b2441428e2       kube-controller-manager-pause-175355   kube-system
	6c4196345d8b0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   a01625675176d       etcd-pause-175355                      kube-system
	e5339095bdcaf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   cc38b7d321f0c       kube-scheduler-pause-175355            kube-system
	
	
	==> coredns [9e2c562768d3ef40e165f634b295addfa5cb5274c101de18e0ab8490c95e8116] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33829 - 49078 "HINFO IN 7629899892878268267.4588495436628105644. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069677976s
	
	
	==> describe nodes <==
	Name:               pause-175355
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-175355
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=pause-175355
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_46_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:46:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-175355
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:46:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:46:33 +0000   Sat, 25 Oct 2025 09:46:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:46:33 +0000   Sat, 25 Oct 2025 09:46:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:46:33 +0000   Sat, 25 Oct 2025 09:46:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:46:33 +0000   Sat, 25 Oct 2025 09:46:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-175355
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                c92ea8b4-95fb-49fa-ad4c-542f024d133c
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://Unknown
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-s4rnp                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-pause-175355                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-zb6d9                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-pause-175355             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-pause-175355    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-cvr5p                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-pause-175355             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node pause-175355 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node pause-175355 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node pause-175355 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node pause-175355 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node pause-175355 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node pause-175355 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node pause-175355 event: Registered Node pause-175355 in Controller
	  Normal  NodeReady                13s                kubelet          Node pause-175355 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 1c f5 68 9f 00 08 06
	[  +4.451388] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 07 4a e3 be 93 08 06
	[Oct25 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.025995] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.023888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.023905] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.024896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.022924] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +2.047850] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +4.031640] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +8.511323] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[ +16.382644] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:03] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	
	
	==> etcd [6c4196345d8b07529fb0ecb3b623137f5db068aabe551ff91243a064f9e8040e] <==
	{"level":"warn","ts":"2025-10-25T09:46:09.414012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.434497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.479648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.488509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.506691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.524989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.542962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.555970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.574163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.592521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.612609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.624494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.636675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.648334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.665335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.683495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.691593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.707718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.722326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.737064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.744869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.762858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.775697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.789032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.855692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57450","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:46:42 up  1:29,  0 user,  load average: 4.56, 1.62, 1.14
	Linux pause-175355 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a03995ac4fccce122a9c1002f2459b9981e66d7e2725203dabc6c68c7494cc34] <==
	I1025 09:46:19.071336       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:46:19.071674       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:46:19.071801       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:46:19.071816       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:46:19.071835       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:46:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:46:19.272096       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:46:19.272164       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:46:19.272190       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:46:19.272861       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:46:19.622934       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:46:19.623080       1 metrics.go:72] Registering metrics
	I1025 09:46:19.623146       1 controller.go:711] "Syncing nftables rules"
	I1025 09:46:29.274048       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:46:29.274105       1 main.go:301] handling current node
	I1025 09:46:39.276482       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:46:39.276535       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ad3565a09ce225576f9b373c237fdbcf567fad3f696a31c2511744550b317595] <==
	I1025 09:46:10.442898       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:46:10.443783       1 controller.go:667] quota admission added evaluator for: namespaces
	E1025 09:46:10.444666       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1025 09:46:10.446473       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:46:10.446575       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:46:10.456749       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:46:10.457601       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:46:10.647701       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:46:11.343737       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:46:11.347339       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:46:11.347374       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:46:11.852233       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:46:11.892631       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:46:11.951774       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:46:11.961428       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 09:46:11.962721       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:46:11.968608       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:46:12.376411       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:46:13.063244       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:46:13.080774       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:46:13.100421       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:46:18.028165       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:46:18.078788       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:46:18.084446       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:46:18.427107       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7d005017d0bf28c57b242a453d705732d305644fa622751689bb381580ae0cc9] <==
	I1025 09:46:17.373277       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:46:17.373613       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:46:17.373718       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:46:17.373780       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:46:17.373849       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:46:17.373617       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:46:17.373992       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:46:17.374256       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:46:17.374276       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:46:17.374290       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:46:17.375656       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:46:17.380655       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:46:17.380696       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:46:17.388688       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:46:17.391900       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:46:17.393430       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:46:17.396570       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:46:17.403801       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:46:17.411441       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:46:17.422275       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:46:17.422411       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:46:17.423271       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:46:17.423301       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:46:17.428960       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:46:32.375185       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5a8d746e97640c6b363685a4b3c2c1a0914d9ab9fe85a43546d4e990f24b8958] <==
	I1025 09:46:18.870468       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:46:18.931708       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:46:19.032455       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:46:19.032492       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:46:19.032610       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:46:19.052286       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:46:19.052342       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:46:19.057679       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:46:19.057996       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:46:19.058023       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:46:19.059096       1 config.go:200] "Starting service config controller"
	I1025 09:46:19.059119       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:46:19.059129       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:46:19.059139       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:46:19.059226       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:46:19.059250       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:46:19.059278       1 config.go:309] "Starting node config controller"
	I1025 09:46:19.059293       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:46:19.059304       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:46:19.159283       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:46:19.159408       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:46:19.159407       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e5339095bdcaff6188dc8b28e527f13dbdfc4e3e0f4daa38be912228b489c18d] <==
	E1025 09:46:10.420758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:46:10.420862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:46:10.420906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:46:10.420985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:46:10.420990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:46:10.421080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:46:10.421155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:46:10.420050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:46:10.421397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:46:10.421859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:46:10.422090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:46:10.422502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:46:10.422567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:46:11.284890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:46:11.312434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:46:11.388739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:46:11.396990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:46:11.444169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:46:11.461599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:46:11.473914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:46:11.512037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:46:11.519519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:46:11.661037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:46:11.716816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 09:46:13.716046       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:46:13 pause-175355 kubelet[1296]: E1025 09:46:13.973555    1296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-175355\" already exists" pod="kube-system/kube-apiserver-pause-175355"
	Oct 25 09:46:13 pause-175355 kubelet[1296]: E1025 09:46:13.974522    1296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-175355\" already exists" pod="kube-system/kube-controller-manager-pause-175355"
	Oct 25 09:46:13 pause-175355 kubelet[1296]: I1025 09:46:13.984139    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-175355" podStartSLOduration=0.984115342 podStartE2EDuration="984.115342ms" podCreationTimestamp="2025-10-25 09:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:46:13.973587229 +0000 UTC m=+1.152177649" watchObservedRunningTime="2025-10-25 09:46:13.984115342 +0000 UTC m=+1.162705755"
	Oct 25 09:46:17 pause-175355 kubelet[1296]: I1025 09:46:17.395980    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 09:46:17 pause-175355 kubelet[1296]: I1025 09:46:17.397309    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539610    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480-lib-modules\") pod \"kube-proxy-cvr5p\" (UID: \"a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480\") " pod="kube-system/kube-proxy-cvr5p"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539653    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8czsd\" (UniqueName: \"kubernetes.io/projected/c5542753-32da-4749-bfdd-948d337adf13-kube-api-access-8czsd\") pod \"kindnet-zb6d9\" (UID: \"c5542753-32da-4749-bfdd-948d337adf13\") " pod="kube-system/kindnet-zb6d9"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539673    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5542753-32da-4749-bfdd-948d337adf13-lib-modules\") pod \"kindnet-zb6d9\" (UID: \"c5542753-32da-4749-bfdd-948d337adf13\") " pod="kube-system/kindnet-zb6d9"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539688    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480-kube-proxy\") pod \"kube-proxy-cvr5p\" (UID: \"a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480\") " pod="kube-system/kube-proxy-cvr5p"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539705    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480-xtables-lock\") pod \"kube-proxy-cvr5p\" (UID: \"a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480\") " pod="kube-system/kube-proxy-cvr5p"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539722    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qtfk\" (UniqueName: \"kubernetes.io/projected/a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480-kube-api-access-5qtfk\") pod \"kube-proxy-cvr5p\" (UID: \"a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480\") " pod="kube-system/kube-proxy-cvr5p"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539742    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c5542753-32da-4749-bfdd-948d337adf13-cni-cfg\") pod \"kindnet-zb6d9\" (UID: \"c5542753-32da-4749-bfdd-948d337adf13\") " pod="kube-system/kindnet-zb6d9"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539761    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5542753-32da-4749-bfdd-948d337adf13-xtables-lock\") pod \"kindnet-zb6d9\" (UID: \"c5542753-32da-4749-bfdd-948d337adf13\") " pod="kube-system/kindnet-zb6d9"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.997676    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zb6d9" podStartSLOduration=0.99765176 podStartE2EDuration="997.65176ms" podCreationTimestamp="2025-10-25 09:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:46:18.985723957 +0000 UTC m=+6.164314378" watchObservedRunningTime="2025-10-25 09:46:18.99765176 +0000 UTC m=+6.176242193"
	Oct 25 09:46:19 pause-175355 kubelet[1296]: I1025 09:46:19.007793    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cvr5p" podStartSLOduration=1.007769213 podStartE2EDuration="1.007769213s" podCreationTimestamp="2025-10-25 09:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:46:19.007615461 +0000 UTC m=+6.186205883" watchObservedRunningTime="2025-10-25 09:46:19.007769213 +0000 UTC m=+6.186359634"
	Oct 25 09:46:29 pause-175355 kubelet[1296]: I1025 09:46:29.437512    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:46:29 pause-175355 kubelet[1296]: I1025 09:46:29.520001    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22bdeb9a-e672-4e75-8488-afecc2d96283-config-volume\") pod \"coredns-66bc5c9577-s4rnp\" (UID: \"22bdeb9a-e672-4e75-8488-afecc2d96283\") " pod="kube-system/coredns-66bc5c9577-s4rnp"
	Oct 25 09:46:29 pause-175355 kubelet[1296]: I1025 09:46:29.520057    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtvbn\" (UniqueName: \"kubernetes.io/projected/22bdeb9a-e672-4e75-8488-afecc2d96283-kube-api-access-rtvbn\") pod \"coredns-66bc5c9577-s4rnp\" (UID: \"22bdeb9a-e672-4e75-8488-afecc2d96283\") " pod="kube-system/coredns-66bc5c9577-s4rnp"
	Oct 25 09:46:30 pause-175355 kubelet[1296]: I1025 09:46:30.012527    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s4rnp" podStartSLOduration=12.012503308 podStartE2EDuration="12.012503308s" podCreationTimestamp="2025-10-25 09:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:46:30.012151267 +0000 UTC m=+17.190741690" watchObservedRunningTime="2025-10-25 09:46:30.012503308 +0000 UTC m=+17.191093729"
	Oct 25 09:46:33 pause-175355 kubelet[1296]: W1025 09:46:33.608974    1296 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 25 09:46:33 pause-175355 kubelet[1296]: E1025 09:46:33.609110    1296 log.go:32] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 25 09:46:39 pause-175355 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:46:39 pause-175355 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:46:39 pause-175355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:46:39 pause-175355 systemd[1]: kubelet.service: Consumed 1.179s CPU time.
	

-- /stdout --
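
The tail of the kubelet journal above is the most informative part of this excerpt: moments before systemd stops kubelet.service, the kubelet's CRI client fails to dial /var/run/crio/crio.sock ("connect: no such file or directory"), which lines up with the `sudo systemctl restart crio` step in the "Last Start" log further below. A minimal, stdlib-only Go sketch of the same reachability probe (the socket path is taken from the log; the probe itself is illustrative, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the CRI socket the kubelet was using; while CRI-O is restarting,
	// the socket file is briefly absent and this fails with the same
	// "connect: no such file or directory" seen in the journal above.
	const sock = "/var/run/crio/crio.sock" // path from the kubelet log
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("CRI socket reachable")
}
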
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-175355 -n pause-175355
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-175355 -n pause-175355: exit status 2 (351.922858ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-175355 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
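
The field-selector query above is how the harness decides whether any pod is wedged: it asks the apiserver for the names of all pods, in every namespace, whose status.phase is anything other than Running; empty output means nothing is obviously stuck. A hedged re-creation of that check as a standalone Go program (the kubectl arguments are copied verbatim from the log; the exec wrapper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same query helpers_test.go runs: list non-Running pods across all
	// namespaces in the pause-175355 kube context.
	out, err := exec.Command("kubectl", "--context", "pause-175355",
		"get", "po", "-o=jsonpath={.items[*].metadata.name}", "-A",
		"--field-selector=status.phase!=Running").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Printf("non-Running pods: %q\n", out)
}
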
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-175355
helpers_test.go:243: (dbg) docker inspect pause-175355:

-- stdout --
	[
	    {
	        "Id": "bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f",
	        "Created": "2025-10-25T09:45:55.716408665Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307201,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:45:55.77808971Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f/hostname",
	        "HostsPath": "/var/lib/docker/containers/bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f/hosts",
	        "LogPath": "/var/lib/docker/containers/bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f/bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f-json.log",
	        "Name": "/pause-175355",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-175355:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-175355",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bcece3f1833d47c1da67a583e847364b43831bd48b6e1427eb943e19a98df22f",
	                "LowerDir": "/var/lib/docker/overlay2/6c5b03bf38486330493419c0c0d808151f6bb05dd74d334b1f6e24b620a5a633-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c5b03bf38486330493419c0c0d808151f6bb05dd74d334b1f6e24b620a5a633/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c5b03bf38486330493419c0c0d808151f6bb05dd74d334b1f6e24b620a5a633/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c5b03bf38486330493419c0c0d808151f6bb05dd74d334b1f6e24b620a5a633/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-175355",
	                "Source": "/var/lib/docker/volumes/pause-175355/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-175355",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-175355",
	                "name.minikube.sigs.k8s.io": "pause-175355",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec19cef6faa2bb6db0d5c71b916ef7eb9866bc2c78d8c426c7241e072ca5a1b7",
	            "SandboxKey": "/var/run/docker/netns/ec19cef6faa2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-175355": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:96:69:95:ed:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "271348bbdff7b7aa2cf9305be03ebc3eb0f9fd766abd7db6c833b6914d5e67a6",
	                    "EndpointID": "eea7e3aebdff7b4a923de7ff3ffd9ec4677b52c79ccde984e205756d864d55de",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-175355",
	                        "bcece3f1833d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
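
Two details in the inspect payload matter for the rest of this post-mortem: State.Status is "running" with Paused=false even though the pause command failed, and NetworkSettings.Ports maps the container's SSH port 22/tcp to 127.0.0.1:33108, the address every SSH-based cli_runner call in the "Last Start" log below connects to. A small sketch of that port lookup, reusing the Go template that appears verbatim in the log (the exec wrapper around the docker CLI is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Extract the host port Docker mapped to the container's SSH port,
	// the same template minikube's cli_runner uses in the log below.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "pause-175355").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("SSH host port:", strings.TrimSpace(string(out))) // "33108" in the JSON above
}
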
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-175355 -n pause-175355
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-175355 -n pause-175355: exit status 2 (341.665436ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
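
As with the {{.APIServer}} probe earlier, `minikube status` prints "Running" yet exits with status 2; a non-zero exit from the status command appears to signal that some component is not fully healthy (an assumption, hedged the same way the helper hedges with its "may be ok"), so the harness records the code and keeps collecting logs instead of aborting. A sketch of reading both the templated output and the exit code (the command line is copied from the log; the wrapper is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same status check and keep stdout plus the exit code; in the
	// run above this printed "Running" while exiting 2.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "pause-175355", "-n", "pause-175355")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("status %q, exit code %d (may be ok)\n", out, ee.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("failed to run status:", err)
		return
	}
	fmt.Printf("status %q\n", out)
}
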
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-175355 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-692905 --schedule 5m                                                                                      │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --schedule 15s                                                                                     │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --schedule 15s                                                                                     │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --schedule 15s                                                                                     │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --cancel-scheduled                                                                                 │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │ 25 Oct 25 09:44 UTC │
	│ stop    │ -p scheduled-stop-692905 --schedule 15s                                                                                     │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --schedule 15s                                                                                     │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ stop    │ -p scheduled-stop-692905 --schedule 15s                                                                                     │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │ 25 Oct 25 09:45 UTC │
	│ delete  │ -p scheduled-stop-692905                                                                                                    │ scheduled-stop-692905       │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │ 25 Oct 25 09:45 UTC │
	│ start   │ -p insufficient-storage-956587 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-956587 │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │                     │
	│ delete  │ -p insufficient-storage-956587                                                                                              │ insufficient-storage-956587 │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │ 25 Oct 25 09:45 UTC │
	│ start   │ -p pause-175355 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-175355                │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p cert-expiration-225615 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-225615      │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p offline-crio-173316 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-173316         │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p NoKubernetes-617681 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │                     │
	│ start   │ -p NoKubernetes-617681 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:45 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p NoKubernetes-617681 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ delete  │ -p offline-crio-173316                                                                                                      │ offline-crio-173316         │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ delete  │ -p NoKubernetes-617681                                                                                                      │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p pause-175355 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-175355                │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p NoKubernetes-617681 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p force-systemd-flag-170120 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-170120   │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ pause   │ -p pause-175355 --alsologtostderr -v=5                                                                                      │ pause-175355                │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p NoKubernetes-617681 sudo systemctl is-active --quiet service kubelet                                                     │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ stop    │ -p NoKubernetes-617681                                                                                                      │ NoKubernetes-617681         │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:46:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:46:33.966162  317940 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:46:33.966428  317940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:33.966438  317940 out.go:374] Setting ErrFile to fd 2...
	I1025 09:46:33.966442  317940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:33.966670  317940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:46:33.967143  317940 out.go:368] Setting JSON to false
	I1025 09:46:33.968378  317940 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5338,"bootTime":1761380256,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:46:33.968565  317940 start.go:141] virtualization: kvm guest
	I1025 09:46:33.970159  317940 out.go:179] * [force-systemd-flag-170120] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:46:33.971341  317940 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:46:33.971340  317940 notify.go:220] Checking for updates...
	I1025 09:46:33.973460  317940 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:46:33.974528  317940 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:46:33.975534  317940 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:46:33.976606  317940 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:46:33.979894  317940 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:46:33.981360  317940 config.go:182] Loaded profile config "cert-expiration-225615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:33.981503  317940 config.go:182] Loaded profile config "pause-175355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:33.981596  317940 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:46:34.006150  317940 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:46:34.006296  317940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:34.070437  317940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-25 09:46:34.060262263 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:46:34.070594  317940 docker.go:318] overlay module found
	I1025 09:46:34.073072  317940 out.go:179] * Using the docker driver based on user configuration
	I1025 09:46:34.074288  317940 start.go:305] selected driver: docker
	I1025 09:46:34.074319  317940 start.go:925] validating driver "docker" against <nil>
	I1025 09:46:34.074337  317940 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:46:34.074926  317940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:34.136051  317940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-25 09:46:34.125341353 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:46:34.136237  317940 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:46:34.136478  317940 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:46:34.138053  317940 out.go:179] * Using Docker driver with root privileges
	I1025 09:46:34.139025  317940 cni.go:84] Creating CNI manager for ""
	I1025 09:46:34.139087  317940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:46:34.139098  317940 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:46:34.139161  317940 start.go:349] cluster config:
	{Name:force-systemd-flag-170120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-170120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:34.140227  317940 out.go:179] * Starting "force-systemd-flag-170120" primary control-plane node in "force-systemd-flag-170120" cluster
	I1025 09:46:34.141125  317940 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:46:34.142208  317940 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:46:34.143319  317940 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:34.143362  317940 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:46:34.143375  317940 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:46:34.143391  317940 cache.go:58] Caching tarball of preloaded images
	I1025 09:46:34.143472  317940 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:46:34.143483  317940 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:46:34.143565  317940 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/force-systemd-flag-170120/config.json ...
	I1025 09:46:34.143582  317940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/force-systemd-flag-170120/config.json: {Name:mkfec303a44bf9939d06b48780c3a32e4337567e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:34.164614  317940 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:46:34.164646  317940 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:46:34.164666  317940 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:46:34.164702  317940 start.go:360] acquireMachinesLock for force-systemd-flag-170120: {Name:mk2131c503a45e338d8dbe5954a03f97b858783e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:46:34.164813  317940 start.go:364] duration metric: took 90.701µs to acquireMachinesLock for "force-systemd-flag-170120"
	I1025 09:46:34.164843  317940 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-170120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-170120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:46:34.164929  317940 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:46:33.941721  317730 preload.go:183] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1025 09:46:33.941760  317730 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:46:33.965107  317730 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:46:33.965128  317730 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	W1025 09:46:34.269225  317730 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1025 09:46:34.287653  317730 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1025 09:46:34.287819  317730 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/NoKubernetes-617681/config.json ...
	I1025 09:46:34.287872  317730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/NoKubernetes-617681/config.json: {Name:mke4c1ba5b16c11429d9b7a3c3c5ca075bb38142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:34.288037  317730 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:46:34.288077  317730 start.go:360] acquireMachinesLock for NoKubernetes-617681: {Name:mk55e5c71f2b935be2255dc6056c6bd549f8a5b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:46:34.288129  317730 start.go:364] duration metric: took 31.088µs to acquireMachinesLock for "NoKubernetes-617681"
	I1025 09:46:34.288146  317730 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-617681 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-617681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:46:34.288225  317730 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:46:32.159100  316943 out.go:252] * Updating the running docker "pause-175355" container ...
	I1025 09:46:32.159132  316943 machine.go:93] provisionDockerMachine start ...
	I1025 09:46:32.159218  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:32.178619  316943 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:32.178843  316943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 09:46:32.178854  316943 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:46:32.321360  316943 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-175355
	
	I1025 09:46:32.321396  316943 ubuntu.go:182] provisioning hostname "pause-175355"
	I1025 09:46:32.321460  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:32.340237  316943 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:32.340475  316943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 09:46:32.340490  316943 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-175355 && echo "pause-175355" | sudo tee /etc/hostname
	I1025 09:46:32.529633  316943 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-175355
	
	I1025 09:46:32.529716  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:32.548442  316943 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:32.548659  316943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 09:46:32.548675  316943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-175355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-175355/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-175355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:46:32.687882  316943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:46:32.687917  316943 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:46:32.687953  316943 ubuntu.go:190] setting up certificates
	I1025 09:46:32.687962  316943 provision.go:84] configureAuth start
	I1025 09:46:32.688013  316943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-175355
	I1025 09:46:32.705706  316943 provision.go:143] copyHostCerts
	I1025 09:46:32.705767  316943 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:46:32.705779  316943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:46:32.705853  316943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:46:32.705945  316943 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:46:32.705954  316943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:46:32.705987  316943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:46:32.706039  316943 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:46:32.706048  316943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:46:32.706077  316943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:46:32.706127  316943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.pause-175355 san=[127.0.0.1 192.168.85.2 localhost minikube pause-175355]
	I1025 09:46:33.144622  316943 provision.go:177] copyRemoteCerts
	I1025 09:46:33.144675  316943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:46:33.144717  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:33.163770  316943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/pause-175355/id_rsa Username:docker}
	I1025 09:46:33.267048  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:46:33.305869  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:46:33.327104  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 09:46:33.362413  316943 provision.go:87] duration metric: took 674.429895ms to configureAuth
	I1025 09:46:33.362446  316943 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:46:33.362719  316943 config.go:182] Loaded profile config "pause-175355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:33.362841  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:33.383946  316943 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:33.384228  316943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1025 09:46:33.384256  316943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:46:33.711290  316943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:46:33.711317  316943 machine.go:96] duration metric: took 1.552176673s to provisionDockerMachine
	I1025 09:46:33.711330  316943 start.go:293] postStartSetup for "pause-175355" (driver="docker")
	I1025 09:46:33.711362  316943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:46:33.711422  316943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:46:33.711479  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:33.731889  316943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/pause-175355/id_rsa Username:docker}
	I1025 09:46:33.837428  316943 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:46:33.842409  316943 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:46:33.842448  316943 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:46:33.842472  316943 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:46:33.842541  316943 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:46:33.842682  316943 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:46:33.842801  316943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:46:33.852191  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:46:33.871831  316943 start.go:296] duration metric: took 160.486562ms for postStartSetup
	I1025 09:46:33.871919  316943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:46:33.871967  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:33.893328  316943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/pause-175355/id_rsa Username:docker}
	I1025 09:46:34.001282  316943 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:46:34.006718  316943 fix.go:56] duration metric: took 1.867432854s for fixHost
	I1025 09:46:34.006748  316943 start.go:83] releasing machines lock for "pause-175355", held for 1.86748032s
	I1025 09:46:34.006807  316943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-175355
	I1025 09:46:34.025007  316943 ssh_runner.go:195] Run: cat /version.json
	I1025 09:46:34.025077  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:34.025155  316943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:46:34.025223  316943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-175355
	I1025 09:46:34.047229  316943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/pause-175355/id_rsa Username:docker}
	I1025 09:46:34.049402  316943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/pause-175355/id_rsa Username:docker}
	I1025 09:46:34.213618  316943 ssh_runner.go:195] Run: systemctl --version
	I1025 09:46:34.220434  316943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:46:34.262910  316943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:46:34.267719  316943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:46:34.267799  316943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:46:34.276139  316943 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:46:34.276171  316943 start.go:495] detecting cgroup driver to use...
	I1025 09:46:34.276203  316943 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:46:34.276252  316943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:46:34.293274  316943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:46:34.307568  316943 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:46:34.307633  316943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:46:34.324969  316943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:46:34.340292  316943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:46:34.484388  316943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:46:34.601154  316943 docker.go:234] disabling docker service ...
	I1025 09:46:34.601222  316943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:46:34.618747  316943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:46:34.639288  316943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:46:34.783913  316943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:46:34.941385  316943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:46:34.960847  316943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:46:34.985130  316943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:46:34.985532  316943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:35.005222  316943 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:46:35.005421  316943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:35.019304  316943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:35.030901  316943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:35.041992  316943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:46:35.051394  316943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:35.070375  316943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:35.080430  316943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
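Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these keys (the exact ordering and the surrounding TOML tables depend on the stock file shipped in the kicbase image):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The daemon-reload and crio restart a few lines below are what make CRI-O pick these up.
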
	I1025 09:46:35.090702  316943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:46:35.099012  316943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:46:35.109159  316943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:35.253971  316943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:46:35.683756  316943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:46:35.683829  316943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:46:35.688157  316943 start.go:563] Will wait 60s for crictl version
	I1025 09:46:35.688222  316943 ssh_runner.go:195] Run: which crictl
	I1025 09:46:35.692767  316943 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:46:35.727936  316943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:46:35.728022  316943 ssh_runner.go:195] Run: crio --version
	I1025 09:46:35.761195  316943 ssh_runner.go:195] Run: crio --version
	I1025 09:46:35.801171  316943 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:46:35.802402  316943 cli_runner.go:164] Run: docker network inspect pause-175355 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:46:35.824758  316943 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:46:35.829654  316943 kubeadm.go:883] updating cluster {Name:pause-175355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-175355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:46:35.829843  316943 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:35.829897  316943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:46:35.866606  316943 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:46:35.866636  316943 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:46:35.866700  316943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:46:35.908939  316943 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:46:35.908967  316943 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:46:35.908977  316943 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:46:35.909111  316943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-175355 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-175355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
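The bare ExecStart= in the [Service] section above is the standard systemd drop-in idiom: an empty assignment clears the command inherited from the base kubelet.service, so the following ExecStart= replaces it instead of appending a second command. Schematically (the path is a placeholder):

    [Service]
    # empty assignment resets the inherited command list
    ExecStart=
    ExecStart=/path/to/replacement-kubelet --with-new-flags
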
	I1025 09:46:35.909248  316943 ssh_runner.go:195] Run: crio config
	I1025 09:46:35.973712  316943 cni.go:84] Creating CNI manager for ""
	I1025 09:46:35.973737  316943 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:46:35.973760  316943 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:46:35.973788  316943 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-175355 NodeName:pause-175355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:46:35.973952  316943 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-175355"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
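The generated file concatenates four kubeadm API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---, and kubeadm parses them all from the single file. Recent kubeadm releases can sanity-check such a file up front; a hypothetical invocation against the staged copy written a few lines below:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
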
	I1025 09:46:35.974033  316943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:46:35.982744  316943 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:46:35.982816  316943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:46:35.990887  316943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1025 09:46:36.005828  316943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:46:36.020758  316943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
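"scp memory" in these lines means the asset is rendered in memory and streamed to the remote path over the established SSH session rather than copied from a local file. A rough shell equivalent for the kubeadm config, using the pause-175355 SSH endpoint seen earlier ($KUBEADM_YAML stands in for the rendered config; the exact mechanism is an assumption based on the log wording):

    printf '%s' "$KUBEADM_YAML" |
      ssh -i /home/jenkins/minikube-integration/21794-130604/.minikube/machines/pause-175355/id_rsa \
        -p 33108 docker@127.0.0.1 'sudo tee /var/tmp/minikube/kubeadm.yaml.new >/dev/null'
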
	I1025 09:46:36.034986  316943 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:46:36.039081  316943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:36.180761  316943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:46:36.194476  316943 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355 for IP: 192.168.85.2
	I1025 09:46:36.194502  316943 certs.go:195] generating shared ca certs ...
	I1025 09:46:36.194525  316943 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:36.194759  316943 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:46:36.194837  316943 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:46:36.194859  316943 certs.go:257] generating profile certs ...
	I1025 09:46:36.194976  316943 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/client.key
	I1025 09:46:36.195050  316943 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/apiserver.key.8c617dd2
	I1025 09:46:36.195130  316943 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/proxy-client.key
	I1025 09:46:36.195301  316943 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:46:36.195360  316943 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:46:36.195376  316943 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:46:36.195418  316943 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:46:36.195464  316943 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:46:36.195497  316943 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:46:36.195565  316943 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:46:36.196520  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:46:36.217261  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:46:36.235533  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:46:36.253574  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:46:36.271090  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:46:36.289145  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:46:36.308709  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:46:36.328016  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:46:36.346612  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:46:36.369662  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:46:36.389784  316943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:46:36.472333  316943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:46:36.486396  316943 ssh_runner.go:195] Run: openssl version
	I1025 09:46:36.493604  316943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:46:36.513649  316943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:36.517992  316943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:36.518047  316943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:36.556084  316943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:46:36.566489  316943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:46:36.578972  316943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:46:36.585247  316943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:46:36.585321  316943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:46:36.635079  316943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:46:36.645174  316943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:46:36.655676  316943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:46:36.660390  316943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:46:36.660459  316943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:46:36.706250  316943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
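The ls/hash/ln sequence for each PEM above implements OpenSSL's hashed-directory lookup: TLS clients resolve a CA in /etc/ssl/certs through a symlink named <subject-hash>.0, which is where b5213941.0, 51391683.0 and 3ec20f2e.0 come from. The pattern, condensed:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
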
	I1025 09:46:36.716687  316943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:46:36.721491  316943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:46:36.762118  316943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:46:36.807654  316943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:46:36.848379  316943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:46:36.904917  316943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:46:36.942499  316943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
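Each of the -checkend 86400 probes above exits non-zero if the certificate would expire within the next 24 hours (86,400 seconds), which is presumably how minikube decides whether a cert can be reused or must be regenerated:

    # exit 0: valid for at least another day; exit 1: expiring soon
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
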
	I1025 09:46:36.980714  316943 kubeadm.go:400] StartCluster: {Name:pause-175355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-175355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:36.980841  316943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:46:36.980910  316943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:46:37.013827  316943 cri.go:89] found id: "9e2c562768d3ef40e165f634b295addfa5cb5274c101de18e0ab8490c95e8116"
	I1025 09:46:37.013850  316943 cri.go:89] found id: "a03995ac4fccce122a9c1002f2459b9981e66d7e2725203dabc6c68c7494cc34"
	I1025 09:46:37.013856  316943 cri.go:89] found id: "5a8d746e97640c6b363685a4b3c2c1a0914d9ab9fe85a43546d4e990f24b8958"
	I1025 09:46:37.013861  316943 cri.go:89] found id: "ad3565a09ce225576f9b373c237fdbcf567fad3f696a31c2511744550b317595"
	I1025 09:46:37.013864  316943 cri.go:89] found id: "7d005017d0bf28c57b242a453d705732d305644fa622751689bb381580ae0cc9"
	I1025 09:46:37.013869  316943 cri.go:89] found id: "6c4196345d8b07529fb0ecb3b623137f5db068aabe551ff91243a064f9e8040e"
	I1025 09:46:37.013873  316943 cri.go:89] found id: "e5339095bdcaff6188dc8b28e527f13dbdfc4e3e0f4daa38be912228b489c18d"
	I1025 09:46:37.013877  316943 cri.go:89] found id: ""
	I1025 09:46:37.013928  316943 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:46:37.027176  316943 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:46:37Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:46:37.027243  316943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:46:37.036265  316943 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:46:37.036289  316943 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:46:37.036338  316943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:46:37.044405  316943 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:46:37.045255  316943 kubeconfig.go:125] found "pause-175355" server: "https://192.168.85.2:8443"
	I1025 09:46:37.046309  316943 kapi.go:59] client config for pause-175355: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/client.key", CAFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:46:37.046926  316943 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 09:46:37.046949  316943 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 09:46:37.046958  316943 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 09:46:37.046970  316943 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 09:46:37.046977  316943 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 09:46:37.047472  316943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:46:37.055761  316943 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 09:46:37.055798  316943 kubeadm.go:601] duration metric: took 19.50313ms to restartPrimaryControlPlane
	I1025 09:46:37.055807  316943 kubeadm.go:402] duration metric: took 75.112206ms to StartCluster
	I1025 09:46:37.055823  316943 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:37.055903  316943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:46:37.056983  316943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:37.124958  316943 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:46:37.125103  316943 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:46:37.125210  316943 config.go:182] Loaded profile config "pause-175355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:37.129924  316943 out.go:179] * Verifying Kubernetes components...
	I1025 09:46:37.129936  316943 out.go:179] * Enabled addons: 
	I1025 09:46:34.290118  317730 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:46:34.290398  317730 start.go:159] libmachine.API.Create for "NoKubernetes-617681" (driver="docker")
	I1025 09:46:34.290459  317730 client.go:168] LocalClient.Create starting
	I1025 09:46:34.290522  317730 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem
	I1025 09:46:34.290560  317730 main.go:141] libmachine: Decoding PEM data...
	I1025 09:46:34.290581  317730 main.go:141] libmachine: Parsing certificate...
	I1025 09:46:34.290695  317730 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem
	I1025 09:46:34.290738  317730 main.go:141] libmachine: Decoding PEM data...
	I1025 09:46:34.290754  317730 main.go:141] libmachine: Parsing certificate...
	I1025 09:46:34.291239  317730 cli_runner.go:164] Run: docker network inspect NoKubernetes-617681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:46:34.312014  317730 cli_runner.go:211] docker network inspect NoKubernetes-617681 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:46:34.312096  317730 network_create.go:284] running [docker network inspect NoKubernetes-617681] to gather additional debugging logs...
	I1025 09:46:34.312136  317730 cli_runner.go:164] Run: docker network inspect NoKubernetes-617681
	W1025 09:46:34.330954  317730 cli_runner.go:211] docker network inspect NoKubernetes-617681 returned with exit code 1
	I1025 09:46:34.330983  317730 network_create.go:287] error running [docker network inspect NoKubernetes-617681]: docker network inspect NoKubernetes-617681: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-617681 not found
	I1025 09:46:34.330995  317730 network_create.go:289] output of [docker network inspect NoKubernetes-617681]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-617681 not found
	
	** /stderr **
	I1025 09:46:34.331081  317730 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:46:34.351886  317730 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b89a58b7fce0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:e2:93:21:98:bc} reservation:<nil>}
	I1025 09:46:34.352685  317730 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4482374e86a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:20:65:c1:4a:19} reservation:<nil>}
	I1025 09:46:34.353316  317730 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7323bc384751 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:33:7f:07:f5:30} reservation:<nil>}
	I1025 09:46:34.353982  317730 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b7a1ea657c41 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0e:10:ed:26:f0:49} reservation:<nil>}
	I1025 09:46:34.354688  317730 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-271348bbdff7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4a:9e:f5:f5:c8:7a} reservation:<nil>}
	I1025 09:46:34.355492  317730 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-b7e4f9cc4b1b IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:da:32:85:b6:c0:99} reservation:<nil>}
	I1025 09:46:34.356188  317730 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea0f30}
	I1025 09:46:34.356217  317730 network_create.go:124] attempt to create docker network NoKubernetes-617681 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1025 09:46:34.356260  317730 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-617681 NoKubernetes-617681
	I1025 09:46:34.429499  317730 network_create.go:108] docker network NoKubernetes-617681 192.168.103.0/24 created
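The probe above walks candidate /24 subnets upward from 192.168.49.0 in steps of nine until it finds one with no existing bridge (each skipped subnet maps to a br-* interface already on the host), then creates the network with a fixed gateway and MTU. The labels attached at creation make the result easy to verify out of band (a quick check, not part of the test run):

    docker network ls --filter label=name.minikube.sigs.k8s.io=NoKubernetes-617681
    docker network inspect -f '{{(index .IPAM.Config 0).Subnet}}' NoKubernetes-617681
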
	I1025 09:46:34.429529  317730 kic.go:121] calculated static IP "192.168.103.2" for the "NoKubernetes-617681" container
	I1025 09:46:34.429595  317730 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:46:34.448285  317730 cli_runner.go:164] Run: docker volume create NoKubernetes-617681 --label name.minikube.sigs.k8s.io=NoKubernetes-617681 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:46:34.466046  317730 oci.go:103] Successfully created a docker volume NoKubernetes-617681
	I1025 09:46:34.466145  317730 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-617681-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-617681 --entrypoint /usr/bin/test -v NoKubernetes-617681:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:46:34.941735  317730 oci.go:107] Successfully prepared a docker volume NoKubernetes-617681
	I1025 09:46:34.941778  317730 preload.go:183] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1025 09:46:34.941889  317730 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:46:34.941922  317730 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:46:34.942110  317730 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:46:35.020795  317730 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-617681 --name NoKubernetes-617681 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-617681 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-617681 --network NoKubernetes-617681 --ip 192.168.103.2 --volume NoKubernetes-617681:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
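Every --publish in the docker run above takes the 127.0.0.1:: form: the host port is left empty, so Docker binds a free ephemeral loopback port for each of 22, 2376, 5000, 8443 and 32443. minikube recovers the chosen port afterwards from the container's port map, which is exactly what the later container-inspect calls do:

    docker container inspect NoKubernetes-617681 \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
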
	I1025 09:46:35.536657  317730 cli_runner.go:164] Run: docker container inspect NoKubernetes-617681 --format={{.State.Running}}
	I1025 09:46:35.558892  317730 cli_runner.go:164] Run: docker container inspect NoKubernetes-617681 --format={{.State.Status}}
	I1025 09:46:35.581909  317730 cli_runner.go:164] Run: docker exec NoKubernetes-617681 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:46:35.638026  317730 oci.go:144] the created container "NoKubernetes-617681" has a running status.
	I1025 09:46:35.638074  317730 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa...
	I1025 09:46:36.545559  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 09:46:36.545659  317730 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:46:36.578585  317730 cli_runner.go:164] Run: docker container inspect NoKubernetes-617681 --format={{.State.Status}}
	I1025 09:46:36.603599  317730 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:46:36.603630  317730 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-617681 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:46:36.657939  317730 cli_runner.go:164] Run: docker container inspect NoKubernetes-617681 --format={{.State.Status}}
	I1025 09:46:36.680157  317730 machine.go:93] provisionDockerMachine start ...
	I1025 09:46:36.680263  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:36.702944  317730 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:36.703324  317730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 09:46:36.703361  317730 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:46:36.853535  317730 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-617681
	
	I1025 09:46:36.853566  317730 ubuntu.go:182] provisioning hostname "NoKubernetes-617681"
	I1025 09:46:36.853633  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:36.874492  317730 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:36.874836  317730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 09:46:36.874858  317730 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-617681 && echo "NoKubernetes-617681" | sudo tee /etc/hostname
	I1025 09:46:37.126643  317730 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-617681
	
	I1025 09:46:37.126719  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:37.147411  317730 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:37.147692  317730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 09:46:37.147719  317730 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-617681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-617681/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-617681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:46:37.298384  317730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:46:37.298414  317730 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:46:37.298441  317730 ubuntu.go:190] setting up certificates
	I1025 09:46:37.298454  317730 provision.go:84] configureAuth start
	I1025 09:46:37.298507  317730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-617681
	I1025 09:46:37.319768  317730 provision.go:143] copyHostCerts
	I1025 09:46:37.319817  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:46:37.319858  317730 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:46:37.319879  317730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:46:37.319968  317730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:46:37.320067  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:46:37.320097  317730 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:46:37.320108  317730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:46:37.320150  317730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:46:37.320217  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:46:37.320244  317730 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:46:37.320254  317730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:46:37.320292  317730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:46:37.320396  317730 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-617681 san=[127.0.0.1 192.168.103.2 NoKubernetes-617681 localhost minikube]
	I1025 09:46:37.529684  317730 provision.go:177] copyRemoteCerts
	I1025 09:46:37.529746  317730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:46:37.529787  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:37.551104  317730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa Username:docker}
	I1025 09:46:37.654222  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 09:46:37.654281  317730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:46:37.676402  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 09:46:37.676480  317730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 09:46:37.694310  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 09:46:37.694400  317730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:46:37.712074  317730 provision.go:87] duration metric: took 413.605345ms to configureAuth
	I1025 09:46:37.712104  317730 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:46:37.712282  317730 config.go:182] Loaded profile config "NoKubernetes-617681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1025 09:46:37.712420  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:37.731037  317730 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:37.731244  317730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1025 09:46:37.731259  317730 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:46:38.240908  317730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
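Writing /etc/sysconfig/crio.minikube and restarting CRI-O only matters if the crio unit actually sources that file; the kicbase image's crio.service presumably carries an EnvironmentFile entry along these lines (an assumption about the image, not shown in this log):

    # hypothetical unit fragment consuming the env file written above
    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
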
	I1025 09:46:38.240941  317730 machine.go:96] duration metric: took 1.560757558s to provisionDockerMachine
	I1025 09:46:38.240953  317730 client.go:171] duration metric: took 3.950482093s to LocalClient.Create
	I1025 09:46:38.240974  317730 start.go:167] duration metric: took 3.950586803s to libmachine.API.Create "NoKubernetes-617681"
	I1025 09:46:38.240981  317730 start.go:293] postStartSetup for "NoKubernetes-617681" (driver="docker")
	I1025 09:46:38.240990  317730 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:46:38.241053  317730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:46:38.241101  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:38.258476  317730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa Username:docker}
	I1025 09:46:38.592659  317730 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:46:38.596864  317730 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:46:38.596895  317730 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:46:38.596909  317730 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:46:38.596979  317730 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:46:38.597055  317730 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:46:38.597065  317730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> /etc/ssl/certs/1341452.pem
	I1025 09:46:38.597144  317730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:46:38.605225  317730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:46:38.626381  317730 start.go:296] duration metric: took 385.387015ms for postStartSetup
	I1025 09:46:38.626741  317730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-617681
	I1025 09:46:38.645791  317730 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/NoKubernetes-617681/config.json ...
	I1025 09:46:38.646143  317730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:46:38.646204  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:38.666462  317730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa Username:docker}
	I1025 09:46:34.169464  317940 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:46:34.169712  317940 start.go:159] libmachine.API.Create for "force-systemd-flag-170120" (driver="docker")
	I1025 09:46:34.169747  317940 client.go:168] LocalClient.Create starting
	I1025 09:46:34.169837  317940 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem
	I1025 09:46:34.169885  317940 main.go:141] libmachine: Decoding PEM data...
	I1025 09:46:34.169913  317940 main.go:141] libmachine: Parsing certificate...
	I1025 09:46:34.170001  317940 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem
	I1025 09:46:34.170034  317940 main.go:141] libmachine: Decoding PEM data...
	I1025 09:46:34.170048  317940 main.go:141] libmachine: Parsing certificate...
	I1025 09:46:34.170446  317940 cli_runner.go:164] Run: docker network inspect force-systemd-flag-170120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:46:34.188296  317940 cli_runner.go:211] docker network inspect force-systemd-flag-170120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:46:34.188403  317940 network_create.go:284] running [docker network inspect force-systemd-flag-170120] to gather additional debugging logs...
	I1025 09:46:34.188430  317940 cli_runner.go:164] Run: docker network inspect force-systemd-flag-170120
	W1025 09:46:34.204892  317940 cli_runner.go:211] docker network inspect force-systemd-flag-170120 returned with exit code 1
	I1025 09:46:34.204922  317940 network_create.go:287] error running [docker network inspect force-systemd-flag-170120]: docker network inspect force-systemd-flag-170120: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-170120 not found
	I1025 09:46:34.204938  317940 network_create.go:289] output of [docker network inspect force-systemd-flag-170120]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-170120 not found
	
	** /stderr **
	I1025 09:46:34.205063  317940 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:46:34.224591  317940 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b89a58b7fce0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:e2:93:21:98:bc} reservation:<nil>}
	I1025 09:46:34.225267  317940 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4482374e86a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:20:65:c1:4a:19} reservation:<nil>}
	I1025 09:46:34.225953  317940 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7323bc384751 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:33:7f:07:f5:30} reservation:<nil>}
	I1025 09:46:34.226700  317940 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b7a1ea657c41 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0e:10:ed:26:f0:49} reservation:<nil>}
	I1025 09:46:34.227504  317940 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-271348bbdff7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4a:9e:f5:f5:c8:7a} reservation:<nil>}
	I1025 09:46:34.228432  317940 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e66ac0}
	I1025 09:46:34.228457  317940 network_create.go:124] attempt to create docker network force-systemd-flag-170120 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 09:46:34.228502  317940 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-170120 force-systemd-flag-170120
	I1025 09:46:34.293561  317940 network_create.go:108] docker network force-systemd-flag-170120 192.168.94.0/24 created
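
The subnet probe above is easy to restate. A minimal Go sketch, assuming minikube's observed behaviour of stepping the third octet by 9 starting at 192.168.49.0/24; isTaken is a hypothetical stand-in for the bridge-interface check that network.go actually performs:

package main

import "fmt"

// isTaken hard-codes the subnets this run reports as occupied; the real
// check inspects the host's docker bridge interfaces (br-* above).
func isTaken(cidr string) bool {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	return taken[cidr]
}

func main() {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if isTaken(cidr) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr) // 192.168.94.0/24 in this run
		return
	}
}
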
	I1025 09:46:34.293612  317940 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-170120" container
	I1025 09:46:34.293704  317940 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:46:34.314078  317940 cli_runner.go:164] Run: docker volume create force-systemd-flag-170120 --label name.minikube.sigs.k8s.io=force-systemd-flag-170120 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:46:34.333548  317940 oci.go:103] Successfully created a docker volume force-systemd-flag-170120
	I1025 09:46:34.333701  317940 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-170120-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-170120 --entrypoint /usr/bin/test -v force-systemd-flag-170120:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:46:34.798380  317940 oci.go:107] Successfully prepared a docker volume force-systemd-flag-170120
	I1025 09:46:34.798431  317940 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:34.798459  317940 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:46:34.798543  317940 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-170120:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:46:38.625886  317940 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-170120:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.827298561s)
	I1025 09:46:38.625934  317940 kic.go:203] duration metric: took 3.827474108s to extract preloaded images to volume ...
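
Note how the preload lands in the volume without touching the host filesystem: a throwaway container mounts the lz4 tarball read-only next to the machine volume and untars into it. A sketch of the same invocation via os/exec (tarball path and volume name copied from the log, image tag shortened; minikube's own wrapper is cli_runner, not this):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
	volume := "force-systemd-flag-170120"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"

	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro", // preload mounted read-only
		"-v", volume+":/extractDir",        // named volume receives the images
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Println(err, string(out))
	}
}
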
	W1025 09:46:38.626042  317940 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:46:38.626090  317940 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:46:38.626146  317940 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:46:38.686830  317940 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-170120 --name force-systemd-flag-170120 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-170120 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-170120 --network force-systemd-flag-170120 --ip 192.168.94.2 --volume force-systemd-flag-170120:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:46:37.132157  316943 addons.go:514] duration metric: took 7.065139ms for enable addons: enabled=[]
	I1025 09:46:37.132192  316943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:37.260389  316943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:46:37.273327  316943 node_ready.go:35] waiting up to 6m0s for node "pause-175355" to be "Ready" ...
	I1025 09:46:37.281111  316943 node_ready.go:49] node "pause-175355" is "Ready"
	I1025 09:46:37.281138  316943 node_ready.go:38] duration metric: took 7.758092ms for node "pause-175355" to be "Ready" ...
	I1025 09:46:37.281154  316943 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:46:37.281208  316943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:46:37.293084  316943 api_server.go:72] duration metric: took 168.068963ms to wait for apiserver process to appear ...
	I1025 09:46:37.293111  316943 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:46:37.293138  316943 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:46:37.298267  316943 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 09:46:37.299435  316943 api_server.go:141] control plane version: v1.34.1
	I1025 09:46:37.299464  316943 api_server.go:131] duration metric: took 6.344829ms to wait for apiserver health ...
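
The healthz wait is a plain mTLS GET that counts as healthy on a 200 with body "ok". A self-contained sketch under that reading, reusing the client cert, key, and CA paths that the kapi client config further down reports for this profile:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	base := "/home/jenkins/minikube-integration/21794-130604/.minikube"
	cert, err := tls.LoadX509KeyPair(
		base+"/profiles/pause-175355/client.crt",
		base+"/profiles/pause-175355/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
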
	I1025 09:46:37.299475  316943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:46:37.302903  316943 system_pods.go:59] 7 kube-system pods found
	I1025 09:46:37.302939  316943 system_pods.go:61] "coredns-66bc5c9577-s4rnp" [22bdeb9a-e672-4e75-8488-afecc2d96283] Running
	I1025 09:46:37.302945  316943 system_pods.go:61] "etcd-pause-175355" [14fe9006-b376-4914-98ab-fd22e19d6f99] Running
	I1025 09:46:37.302949  316943 system_pods.go:61] "kindnet-zb6d9" [c5542753-32da-4749-bfdd-948d337adf13] Running
	I1025 09:46:37.302953  316943 system_pods.go:61] "kube-apiserver-pause-175355" [b3d3ef22-7ef5-44da-8a1c-a189970e788f] Running
	I1025 09:46:37.302956  316943 system_pods.go:61] "kube-controller-manager-pause-175355" [47a8931d-33a9-48e3-b96e-8967bddc533d] Running
	I1025 09:46:37.302958  316943 system_pods.go:61] "kube-proxy-cvr5p" [a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480] Running
	I1025 09:46:37.302961  316943 system_pods.go:61] "kube-scheduler-pause-175355" [37300eb1-351d-48cc-973b-b7554cf78b07] Running
	I1025 09:46:37.302966  316943 system_pods.go:74] duration metric: took 3.485271ms to wait for pod list to return data ...
	I1025 09:46:37.302976  316943 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:46:37.304838  316943 default_sa.go:45] found service account: "default"
	I1025 09:46:37.304861  316943 default_sa.go:55] duration metric: took 1.87752ms for default service account to be created ...
	I1025 09:46:37.304872  316943 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:46:37.307661  316943 system_pods.go:86] 7 kube-system pods found
	I1025 09:46:37.307685  316943 system_pods.go:89] "coredns-66bc5c9577-s4rnp" [22bdeb9a-e672-4e75-8488-afecc2d96283] Running
	I1025 09:46:37.307690  316943 system_pods.go:89] "etcd-pause-175355" [14fe9006-b376-4914-98ab-fd22e19d6f99] Running
	I1025 09:46:37.307694  316943 system_pods.go:89] "kindnet-zb6d9" [c5542753-32da-4749-bfdd-948d337adf13] Running
	I1025 09:46:37.307697  316943 system_pods.go:89] "kube-apiserver-pause-175355" [b3d3ef22-7ef5-44da-8a1c-a189970e788f] Running
	I1025 09:46:37.307700  316943 system_pods.go:89] "kube-controller-manager-pause-175355" [47a8931d-33a9-48e3-b96e-8967bddc533d] Running
	I1025 09:46:37.307713  316943 system_pods.go:89] "kube-proxy-cvr5p" [a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480] Running
	I1025 09:46:37.307720  316943 system_pods.go:89] "kube-scheduler-pause-175355" [37300eb1-351d-48cc-973b-b7554cf78b07] Running
	I1025 09:46:37.307729  316943 system_pods.go:126] duration metric: took 2.849429ms to wait for k8s-apps to be running ...
	I1025 09:46:37.307742  316943 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:46:37.307794  316943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:46:37.322874  316943 system_svc.go:56] duration metric: took 15.115051ms WaitForService to wait for kubelet
	I1025 09:46:37.322905  316943 kubeadm.go:586] duration metric: took 197.895327ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:46:37.322927  316943 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:46:37.325681  316943 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:46:37.325704  316943 node_conditions.go:123] node cpu capacity is 8
	I1025 09:46:37.325715  316943 node_conditions.go:105] duration metric: took 2.782916ms to run NodePressure ...
	I1025 09:46:37.325730  316943 start.go:241] waiting for startup goroutines ...
	I1025 09:46:37.325739  316943 start.go:246] waiting for cluster config update ...
	I1025 09:46:37.325750  316943 start.go:255] writing updated cluster config ...
	I1025 09:46:37.326046  316943 ssh_runner.go:195] Run: rm -f paused
	I1025 09:46:37.329972  316943 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:46:37.330745  316943 kapi.go:59] client config for pause-175355: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/profiles/pause-175355/client.key", CAFile:"/home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:46:37.333496  316943 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s4rnp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.337507  316943 pod_ready.go:94] pod "coredns-66bc5c9577-s4rnp" is "Ready"
	I1025 09:46:37.337529  316943 pod_ready.go:86] duration metric: took 4.010415ms for pod "coredns-66bc5c9577-s4rnp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.339470  316943 pod_ready.go:83] waiting for pod "etcd-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.343242  316943 pod_ready.go:94] pod "etcd-pause-175355" is "Ready"
	I1025 09:46:37.343268  316943 pod_ready.go:86] duration metric: took 3.776089ms for pod "etcd-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.345158  316943 pod_ready.go:83] waiting for pod "kube-apiserver-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.348985  316943 pod_ready.go:94] pod "kube-apiserver-pause-175355" is "Ready"
	I1025 09:46:37.349007  316943 pod_ready.go:86] duration metric: took 3.827259ms for pod "kube-apiserver-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.351068  316943 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.734479  316943 pod_ready.go:94] pod "kube-controller-manager-pause-175355" is "Ready"
	I1025 09:46:37.734503  316943 pod_ready.go:86] duration metric: took 383.41353ms for pod "kube-controller-manager-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:37.934657  316943 pod_ready.go:83] waiting for pod "kube-proxy-cvr5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:38.334715  316943 pod_ready.go:94] pod "kube-proxy-cvr5p" is "Ready"
	I1025 09:46:38.334743  316943 pod_ready.go:86] duration metric: took 400.060172ms for pod "kube-proxy-cvr5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:38.534755  316943 pod_ready.go:83] waiting for pod "kube-scheduler-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:38.934748  316943 pod_ready.go:94] pod "kube-scheduler-pause-175355" is "Ready"
	I1025 09:46:38.934778  316943 pod_ready.go:86] duration metric: took 399.995258ms for pod "kube-scheduler-pause-175355" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:46:38.934793  316943 pod_ready.go:40] duration metric: took 1.604772098s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
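
The per-pod test behind pod_ready.go reduces to checking the PodReady condition. A client-go sketch under that assumption (not minikube's exact code; the kubeconfig is the one "Done!" below says is now configured):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady: a pod counts as "Ready" when its PodReady condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(
		context.Background(), "kube-scheduler-pause-175355", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Name, "ready:", isPodReady(pod))
}
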
	I1025 09:46:38.984662  316943 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:46:38.986472  316943 out.go:179] * Done! kubectl is now configured to use "pause-175355" cluster and "default" namespace by default
	I1025 09:46:38.767646  317730 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:46:38.772183  317730 start.go:128] duration metric: took 4.483941993s to createHost
	I1025 09:46:38.772212  317730 start.go:83] releasing machines lock for "NoKubernetes-617681", held for 4.484072285s
	I1025 09:46:38.772280  317730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-617681
	I1025 09:46:38.790316  317730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:46:38.790389  317730 ssh_runner.go:195] Run: cat /version.json
	I1025 09:46:38.790434  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:38.790434  317730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-617681
	I1025 09:46:38.812040  317730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa Username:docker}
	I1025 09:46:38.813438  317730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/NoKubernetes-617681/id_rsa Username:docker}
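
Because the kic container publishes 22/tcp on an ephemeral 127.0.0.1 port (--publish=127.0.0.1::22 at creation time), the host-side SSH port has to be read back from docker, which is what the inspect template above does. The same lookup as a small sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Go template copied from the log line above.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"NoKubernetes-617681").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // 33113 in this run
}
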
	I1025 09:46:38.984217  317730 ssh_runner.go:195] Run: systemctl --version
	I1025 09:46:38.992234  317730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:46:39.038661  317730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:46:39.045030  317730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:46:39.045097  317730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:46:39.077984  317730 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:46:39.078035  317730 start.go:495] detecting cgroup driver to use...
	I1025 09:46:39.078071  317730 detect.go:190] detected "systemd" cgroup driver on host os
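
How detect.go reaches "systemd" is not shown in the log; a plausible reduction, stated as an assumption rather than a copy of its logic: on a cgroup-v2 host whose PID 1 is systemd, prefer the "systemd" driver, otherwise fall back to cgroupfs.

package main

import (
	"fmt"
	"os"
	"strings"
)

// detectCgroupDriver is an assumed reconstruction, not minikube's code.
func detectCgroupDriver() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil { // cgroup v2
		if comm, err := os.ReadFile("/proc/1/comm"); err == nil &&
			strings.TrimSpace(string(comm)) == "systemd" {
			return "systemd"
		}
	}
	return "cgroupfs"
}

func main() { fmt.Println(detectCgroupDriver()) }
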
	I1025 09:46:39.078143  317730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:46:39.096402  317730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:46:39.112482  317730 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:46:39.112531  317730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:46:39.138082  317730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:46:39.165737  317730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:46:39.286434  317730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:46:39.414508  317730 docker.go:234] disabling docker service ...
	I1025 09:46:39.414584  317730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:46:39.440541  317730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:46:39.458237  317730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:46:39.566627  317730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:46:39.668340  317730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:46:39.681578  317730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:46:39.696659  317730 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/linux/amd64/v0.0.0/kubeadm
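
The ?checksum=file:...sha1 suffix above is go-getter syntax: the downloaded binary is verified against the published .sha1 sidecar (v0.0.0 is the placeholder version minikube requests when Kubernetes itself is disabled). The comparison amounts to the following; verifySHA1 is a hypothetical helper, not minikube's API:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"strings"
)

// verifySHA1 hashes the downloaded bytes and compares against the first
// field of the .sha1 sidecar file.
func verifySHA1(data []byte, sidecar string) bool {
	sum := sha1.Sum(data)
	return hex.EncodeToString(sum[:]) == strings.Fields(sidecar)[0]
}

func main() {
	data := []byte("kubeadm-binary-bytes") // stand-in for the real download
	sum := sha1.Sum(data)
	sidecar := hex.EncodeToString(sum[:]) + "  kubeadm" // what a matching sidecar looks like
	fmt.Println(verifySHA1(data, sidecar))              // true
}
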
	I1025 09:46:40.225530  317730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 09:46:40.225590  317730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:40.236781  317730 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:46:40.236903  317730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:40.246075  317730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:40.254825  317730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:40.263707  317730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:46:40.271656  317730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:46:40.279166  317730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:46:40.286492  317730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:40.364447  317730 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:46:40.466948  317730 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:46:40.467007  317730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:46:40.471122  317730 start.go:563] Will wait 60s for crictl version
	I1025 09:46:40.471199  317730 ssh_runner.go:195] Run: which crictl
	I1025 09:46:40.474838  317730 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:46:40.499313  317730 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:46:40.499437  317730 ssh_runner.go:195] Run: crio --version
	I1025 09:46:40.528755  317730 ssh_runner.go:195] Run: crio --version
	I1025 09:46:40.559677  317730 out.go:179] * Preparing CRI-O 1.34.1 ...
	I1025 09:46:40.560846  317730 ssh_runner.go:195] Run: rm -f paused
	I1025 09:46:40.566019  317730 out.go:179] * Done! minikube is ready without Kubernetes!
	I1025 09:46:40.568576  317730 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:46:38.970770  317940 cli_runner.go:164] Run: docker container inspect force-systemd-flag-170120 --format={{.State.Running}}
	I1025 09:46:38.991274  317940 cli_runner.go:164] Run: docker container inspect force-systemd-flag-170120 --format={{.State.Status}}
	I1025 09:46:39.015759  317940 cli_runner.go:164] Run: docker exec force-systemd-flag-170120 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:46:39.070657  317940 oci.go:144] the created container "force-systemd-flag-170120" has a running status.
	I1025 09:46:39.070689  317940 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/force-systemd-flag-170120/id_rsa...
	I1025 09:46:39.419648  317940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/force-systemd-flag-170120/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 09:46:39.419771  317940 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-130604/.minikube/machines/force-systemd-flag-170120/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:46:39.450431  317940 cli_runner.go:164] Run: docker container inspect force-systemd-flag-170120 --format={{.State.Status}}
	I1025 09:46:39.469867  317940 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:46:39.469893  317940 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-170120 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:46:39.524057  317940 cli_runner.go:164] Run: docker container inspect force-systemd-flag-170120 --format={{.State.Status}}
	I1025 09:46:39.544271  317940 machine.go:93] provisionDockerMachine start ...
	I1025 09:46:39.544392  317940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-170120
	I1025 09:46:39.566382  317940 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:39.566680  317940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1025 09:46:39.566700  317940 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:46:39.715736  317940 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-170120
	
	I1025 09:46:39.715764  317940 ubuntu.go:182] provisioning hostname "force-systemd-flag-170120"
	I1025 09:46:39.715834  317940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-170120
	I1025 09:46:39.734129  317940 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:39.734333  317940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1025 09:46:39.734369  317940 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-170120 && echo "force-systemd-flag-170120" | sudo tee /etc/hostname
	I1025 09:46:39.885664  317940 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-170120
	
	I1025 09:46:39.885749  317940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-170120
	I1025 09:46:39.903554  317940 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:39.903798  317940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1025 09:46:39.903816  317940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-170120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-170120/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-170120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:46:40.045178  317940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:46:40.045208  317940 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:46:40.045239  317940 ubuntu.go:190] setting up certificates
	I1025 09:46:40.045253  317940 provision.go:84] configureAuth start
	I1025 09:46:40.045432  317940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-170120
	I1025 09:46:40.063755  317940 provision.go:143] copyHostCerts
	I1025 09:46:40.063811  317940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:46:40.063851  317940 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:46:40.063863  317940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:46:40.063949  317940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:46:40.064063  317940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:46:40.064095  317940 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:46:40.064105  317940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:46:40.064153  317940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:46:40.064242  317940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:46:40.064268  317940 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:46:40.064274  317940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:46:40.064311  317940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:46:40.064415  317940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-170120 san=[127.0.0.1 192.168.94.2 force-systemd-flag-170120 localhost minikube]
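
The server cert generated here carries the SAN list shown in the log line above. A crypto/x509 sketch of an equivalent certificate, simplified to be self-signed with an ECDSA key where minikube signs with its CA and uses RSA; serial number and validity are also assumptions:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-flag-170120"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go line above:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:    []string{"force-systemd-flag-170120", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println("server cert DER bytes:", len(der))
}
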
	I1025 09:46:40.195325  317940 provision.go:177] copyRemoteCerts
	I1025 09:46:40.195397  317940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:46:40.195445  317940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-170120
	I1025 09:46:40.213800  317940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/force-systemd-flag-170120/id_rsa Username:docker}
	I1025 09:46:40.316585  317940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 09:46:40.316665  317940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:46:40.337076  317940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 09:46:40.337151  317940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1025 09:46:40.354419  317940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 09:46:40.354479  317940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:46:40.373502  317940 provision.go:87] duration metric: took 328.228996ms to configureAuth
	I1025 09:46:40.373551  317940 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:46:40.373729  317940 config.go:182] Loaded profile config "force-systemd-flag-170120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:40.373827  317940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-170120
	I1025 09:46:40.392957  317940 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:40.393232  317940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1025 09:46:40.393255  317940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:46:40.652807  317940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:46:40.652832  317940 machine.go:96] duration metric: took 1.108534618s to provisionDockerMachine
	I1025 09:46:40.652844  317940 client.go:171] duration metric: took 6.483090589s to LocalClient.Create
	I1025 09:46:40.652866  317940 start.go:167] duration metric: took 6.483155534s to libmachine.API.Create "force-systemd-flag-170120"
	I1025 09:46:40.652892  317940 start.go:293] postStartSetup for "force-systemd-flag-170120" (driver="docker")
	I1025 09:46:40.652906  317940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:46:40.652969  317940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:46:40.653019  317940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-170120
	I1025 09:46:40.678881  317940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/force-systemd-flag-170120/id_rsa Username:docker}
	I1025 09:46:40.790037  317940 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:46:40.795186  317940 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:46:40.795218  317940 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:46:40.795237  317940 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:46:40.795286  317940 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:46:40.795414  317940 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:46:40.795432  317940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> /etc/ssl/certs/1341452.pem
	I1025 09:46:40.795542  317940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:46:40.803642  317940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:46:40.826224  317940 start.go:296] duration metric: took 173.311571ms for postStartSetup
	I1025 09:46:40.826711  317940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-170120
	I1025 09:46:40.846025  317940 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/force-systemd-flag-170120/config.json ...
	I1025 09:46:40.846395  317940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:46:40.846455  317940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-170120
	I1025 09:46:40.865272  317940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/force-systemd-flag-170120/id_rsa Username:docker}
	I1025 09:46:40.968760  317940 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:46:40.974082  317940 start.go:128] duration metric: took 6.809138312s to createHost
	I1025 09:46:40.974106  317940 start.go:83] releasing machines lock for "force-systemd-flag-170120", held for 6.809279097s
	I1025 09:46:40.974167  317940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-170120
	I1025 09:46:40.993882  317940 ssh_runner.go:195] Run: cat /version.json
	I1025 09:46:40.993948  317940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-170120
	I1025 09:46:40.993983  317940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:46:40.994050  317940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-170120
	I1025 09:46:41.015971  317940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/force-systemd-flag-170120/id_rsa Username:docker}
	I1025 09:46:41.016421  317940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/force-systemd-flag-170120/id_rsa Username:docker}
	I1025 09:46:41.186245  317940 ssh_runner.go:195] Run: systemctl --version
	I1025 09:46:41.196850  317940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:46:41.239943  317940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:46:41.244952  317940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:46:41.245010  317940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:46:41.280989  317940 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:46:41.281011  317940 start.go:495] detecting cgroup driver to use...
	I1025 09:46:41.281026  317940 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1025 09:46:41.281092  317940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:46:41.301240  317940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:46:41.316009  317940 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:46:41.316066  317940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:46:41.333249  317940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:46:41.353962  317940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:46:41.454306  317940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:46:41.554475  317940 docker.go:234] disabling docker service ...
	I1025 09:46:41.554536  317940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:46:41.574441  317940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:46:41.588212  317940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:46:41.681116  317940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:46:41.781444  317940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:46:41.794873  317940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:46:41.811087  317940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:46:41.811140  317940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:41.821177  317940 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:46:41.821261  317940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:41.830295  317940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:41.839074  317940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:41.848261  317940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:46:41.856994  317940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:41.866953  317940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:41.883141  317940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
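
After this run of sed edits, the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf should look roughly like the following; key names and values match the "Current CRI-O configuration" dump further down, while the exact surrounding layout is an assumption:

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"
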
	I1025 09:46:41.893371  317940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:46:41.901831  317940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:46:41.910023  317940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:42.011004  317940 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:46:42.126692  317940 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:46:42.126766  317940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:46:42.130782  317940 start.go:563] Will wait 60s for crictl version
	I1025 09:46:42.130845  317940 ssh_runner.go:195] Run: which crictl
	I1025 09:46:42.134646  317940 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:46:42.160555  317940 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:46:42.160672  317940 ssh_runner.go:195] Run: crio --version
	I1025 09:46:42.192085  317940 ssh_runner.go:195] Run: crio --version
	I1025 09:46:42.227753  317940 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.618933768Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.619886913Z" level=info msg="Conmon does support the --sync option"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.619907706Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.619920721Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.620656225Z" level=info msg="Conmon does support the --sync option"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.620669023Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.624685856Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.624724417Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.625680719Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.626884527Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.626955323Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.63356263Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679065046Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-s4rnp Namespace:kube-system ID:d3362595f8ac943d418bc941dc43e579c44004f86352196d05c1490003346646 UID:22bdeb9a-e672-4e75-8488-afecc2d96283 NetNS:/var/run/netns/5b4313e0-d83b-4050-b281-04eb70ba2214 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a358}] Aliases:map[]}"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679238888Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-s4rnp for CNI network kindnet (type=ptp)"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.67969442Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679724374Z" level=info msg="Starting seccomp notifier watcher"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679775387Z" level=info msg="Create NRI interface"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679910071Z" level=info msg="built-in NRI default validator is disabled"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679922595Z" level=info msg="runtime interface created"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679935649Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679943542Z" level=info msg="runtime interface starting up..."
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679950985Z" level=info msg="starting plugins..."
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.679965945Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 25 09:46:35 pause-175355 crio[2148]: time="2025-10-25T09:46:35.680246724Z" level=info msg="No systemd watchdog enabled"
	Oct 25 09:46:35 pause-175355 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	9e2c562768d3e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   d3362595f8ac9       coredns-66bc5c9577-s4rnp               kube-system
	a03995ac4fccc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   548e91cfdb6a5       kindnet-zb6d9                          kube-system
	5a8d746e97640       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago      Running             kube-proxy                0                   f31338de3607b       kube-proxy-cvr5p                       kube-system
	ad3565a09ce22       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   36 seconds ago      Running             kube-apiserver            0                   bf58aa99a937a       kube-apiserver-pause-175355            kube-system
	7d005017d0bf2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   36 seconds ago      Running             kube-controller-manager   0                   204b2441428e2       kube-controller-manager-pause-175355   kube-system
	6c4196345d8b0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   36 seconds ago      Running             etcd                      0                   a01625675176d       etcd-pause-175355                      kube-system
	e5339095bdcaf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago      Running             kube-scheduler            0                   cc38b7d321f0c       kube-scheduler-pause-175355            kube-system
	
	
	==> coredns [9e2c562768d3ef40e165f634b295addfa5cb5274c101de18e0ab8490c95e8116] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33829 - 49078 "HINFO IN 7629899892878268267.4588495436628105644. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069677976s
	
	
	==> describe nodes <==
	Name:               pause-175355
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-175355
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=pause-175355
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_46_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:46:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-175355
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:46:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:46:33 +0000   Sat, 25 Oct 2025 09:46:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:46:33 +0000   Sat, 25 Oct 2025 09:46:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:46:33 +0000   Sat, 25 Oct 2025 09:46:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:46:33 +0000   Sat, 25 Oct 2025 09:46:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-175355
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                c92ea8b4-95fb-49fa-ad4c-542f024d133c
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://Unknown
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-s4rnp                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-175355                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-zb6d9                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-175355             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-175355    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-cvr5p                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-175355             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node pause-175355 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node pause-175355 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node pause-175355 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node pause-175355 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node pause-175355 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node pause-175355 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node pause-175355 event: Registered Node pause-175355 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-175355 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 1c f5 68 9f 00 08 06
	[  +4.451388] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 07 4a e3 be 93 08 06
	[Oct25 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.025995] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.023888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.023905] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.024896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +1.022924] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +2.047850] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +4.031640] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[  +8.511323] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[ +16.382644] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:03] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	
	
	==> etcd [6c4196345d8b07529fb0ecb3b623137f5db068aabe551ff91243a064f9e8040e] <==
	{"level":"warn","ts":"2025-10-25T09:46:09.414012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.434497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.479648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.488509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.506691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.524989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.542962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.555970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.574163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.592521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.612609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.624494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.636675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.648334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.665335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.683495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.691593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.707718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.722326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.737064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.744869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.762858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.775697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.789032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:46:09.855692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57450","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:46:43 up  1:29,  0 user,  load average: 4.52, 1.66, 1.15
	Linux pause-175355 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a03995ac4fccce122a9c1002f2459b9981e66d7e2725203dabc6c68c7494cc34] <==
	I1025 09:46:19.071336       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:46:19.071674       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:46:19.071801       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:46:19.071816       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:46:19.071835       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:46:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:46:19.272096       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:46:19.272164       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:46:19.272190       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:46:19.272861       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:46:19.622934       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:46:19.623080       1 metrics.go:72] Registering metrics
	I1025 09:46:19.623146       1 controller.go:711] "Syncing nftables rules"
	I1025 09:46:29.274048       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:46:29.274105       1 main.go:301] handling current node
	I1025 09:46:39.276482       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:46:39.276535       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ad3565a09ce225576f9b373c237fdbcf567fad3f696a31c2511744550b317595] <==
	I1025 09:46:10.442898       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:46:10.443783       1 controller.go:667] quota admission added evaluator for: namespaces
	E1025 09:46:10.444666       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1025 09:46:10.446473       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:46:10.446575       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:46:10.456749       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:46:10.457601       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:46:10.647701       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:46:11.343737       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:46:11.347339       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:46:11.347374       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:46:11.852233       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:46:11.892631       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:46:11.951774       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:46:11.961428       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 09:46:11.962721       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:46:11.968608       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:46:12.376411       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:46:13.063244       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:46:13.080774       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:46:13.100421       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:46:18.028165       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:46:18.078788       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:46:18.084446       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:46:18.427107       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7d005017d0bf28c57b242a453d705732d305644fa622751689bb381580ae0cc9] <==
	I1025 09:46:17.373277       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:46:17.373613       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:46:17.373718       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:46:17.373780       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:46:17.373849       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:46:17.373617       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:46:17.373992       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:46:17.374256       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:46:17.374276       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:46:17.374290       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:46:17.375656       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:46:17.380655       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:46:17.380696       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:46:17.388688       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:46:17.391900       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:46:17.393430       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:46:17.396570       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:46:17.403801       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:46:17.411441       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:46:17.422275       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:46:17.422411       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:46:17.423271       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:46:17.423301       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:46:17.428960       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:46:32.375185       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5a8d746e97640c6b363685a4b3c2c1a0914d9ab9fe85a43546d4e990f24b8958] <==
	I1025 09:46:18.870468       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:46:18.931708       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:46:19.032455       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:46:19.032492       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:46:19.032610       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:46:19.052286       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:46:19.052342       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:46:19.057679       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:46:19.057996       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:46:19.058023       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:46:19.059096       1 config.go:200] "Starting service config controller"
	I1025 09:46:19.059119       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:46:19.059129       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:46:19.059139       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:46:19.059226       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:46:19.059250       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:46:19.059278       1 config.go:309] "Starting node config controller"
	I1025 09:46:19.059293       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:46:19.059304       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:46:19.159283       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:46:19.159408       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:46:19.159407       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e5339095bdcaff6188dc8b28e527f13dbdfc4e3e0f4daa38be912228b489c18d] <==
	E1025 09:46:10.420758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:46:10.420862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:46:10.420906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:46:10.420985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:46:10.420990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:46:10.421080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:46:10.421155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:46:10.420050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:46:10.421397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:46:10.421859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:46:10.422090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:46:10.422502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:46:10.422567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:46:11.284890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:46:11.312434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:46:11.388739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:46:11.396990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:46:11.444169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:46:11.461599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:46:11.473914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:46:11.512037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:46:11.519519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:46:11.661037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:46:11.716816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 09:46:13.716046       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:46:13 pause-175355 kubelet[1296]: E1025 09:46:13.973555    1296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-175355\" already exists" pod="kube-system/kube-apiserver-pause-175355"
	Oct 25 09:46:13 pause-175355 kubelet[1296]: E1025 09:46:13.974522    1296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-175355\" already exists" pod="kube-system/kube-controller-manager-pause-175355"
	Oct 25 09:46:13 pause-175355 kubelet[1296]: I1025 09:46:13.984139    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-175355" podStartSLOduration=0.984115342 podStartE2EDuration="984.115342ms" podCreationTimestamp="2025-10-25 09:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:46:13.973587229 +0000 UTC m=+1.152177649" watchObservedRunningTime="2025-10-25 09:46:13.984115342 +0000 UTC m=+1.162705755"
	Oct 25 09:46:17 pause-175355 kubelet[1296]: I1025 09:46:17.395980    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 09:46:17 pause-175355 kubelet[1296]: I1025 09:46:17.397309    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539610    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480-lib-modules\") pod \"kube-proxy-cvr5p\" (UID: \"a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480\") " pod="kube-system/kube-proxy-cvr5p"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539653    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8czsd\" (UniqueName: \"kubernetes.io/projected/c5542753-32da-4749-bfdd-948d337adf13-kube-api-access-8czsd\") pod \"kindnet-zb6d9\" (UID: \"c5542753-32da-4749-bfdd-948d337adf13\") " pod="kube-system/kindnet-zb6d9"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539673    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5542753-32da-4749-bfdd-948d337adf13-lib-modules\") pod \"kindnet-zb6d9\" (UID: \"c5542753-32da-4749-bfdd-948d337adf13\") " pod="kube-system/kindnet-zb6d9"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539688    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480-kube-proxy\") pod \"kube-proxy-cvr5p\" (UID: \"a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480\") " pod="kube-system/kube-proxy-cvr5p"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539705    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480-xtables-lock\") pod \"kube-proxy-cvr5p\" (UID: \"a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480\") " pod="kube-system/kube-proxy-cvr5p"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539722    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qtfk\" (UniqueName: \"kubernetes.io/projected/a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480-kube-api-access-5qtfk\") pod \"kube-proxy-cvr5p\" (UID: \"a3a2e3dc-2eb5-4b07-90c7-5b06ad4dc480\") " pod="kube-system/kube-proxy-cvr5p"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539742    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c5542753-32da-4749-bfdd-948d337adf13-cni-cfg\") pod \"kindnet-zb6d9\" (UID: \"c5542753-32da-4749-bfdd-948d337adf13\") " pod="kube-system/kindnet-zb6d9"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.539761    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5542753-32da-4749-bfdd-948d337adf13-xtables-lock\") pod \"kindnet-zb6d9\" (UID: \"c5542753-32da-4749-bfdd-948d337adf13\") " pod="kube-system/kindnet-zb6d9"
	Oct 25 09:46:18 pause-175355 kubelet[1296]: I1025 09:46:18.997676    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zb6d9" podStartSLOduration=0.99765176 podStartE2EDuration="997.65176ms" podCreationTimestamp="2025-10-25 09:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:46:18.985723957 +0000 UTC m=+6.164314378" watchObservedRunningTime="2025-10-25 09:46:18.99765176 +0000 UTC m=+6.176242193"
	Oct 25 09:46:19 pause-175355 kubelet[1296]: I1025 09:46:19.007793    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cvr5p" podStartSLOduration=1.007769213 podStartE2EDuration="1.007769213s" podCreationTimestamp="2025-10-25 09:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:46:19.007615461 +0000 UTC m=+6.186205883" watchObservedRunningTime="2025-10-25 09:46:19.007769213 +0000 UTC m=+6.186359634"
	Oct 25 09:46:29 pause-175355 kubelet[1296]: I1025 09:46:29.437512    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:46:29 pause-175355 kubelet[1296]: I1025 09:46:29.520001    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22bdeb9a-e672-4e75-8488-afecc2d96283-config-volume\") pod \"coredns-66bc5c9577-s4rnp\" (UID: \"22bdeb9a-e672-4e75-8488-afecc2d96283\") " pod="kube-system/coredns-66bc5c9577-s4rnp"
	Oct 25 09:46:29 pause-175355 kubelet[1296]: I1025 09:46:29.520057    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtvbn\" (UniqueName: \"kubernetes.io/projected/22bdeb9a-e672-4e75-8488-afecc2d96283-kube-api-access-rtvbn\") pod \"coredns-66bc5c9577-s4rnp\" (UID: \"22bdeb9a-e672-4e75-8488-afecc2d96283\") " pod="kube-system/coredns-66bc5c9577-s4rnp"
	Oct 25 09:46:30 pause-175355 kubelet[1296]: I1025 09:46:30.012527    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s4rnp" podStartSLOduration=12.012503308 podStartE2EDuration="12.012503308s" podCreationTimestamp="2025-10-25 09:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:46:30.012151267 +0000 UTC m=+17.190741690" watchObservedRunningTime="2025-10-25 09:46:30.012503308 +0000 UTC m=+17.191093729"
	Oct 25 09:46:33 pause-175355 kubelet[1296]: W1025 09:46:33.608974    1296 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 25 09:46:33 pause-175355 kubelet[1296]: E1025 09:46:33.609110    1296 log.go:32] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 25 09:46:39 pause-175355 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:46:39 pause-175355 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:46:39 pause-175355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:46:39 pause-175355 systemd[1]: kubelet.service: Consumed 1.179s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-175355 -n pause-175355
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-175355 -n pause-175355: exit status 2 (366.260711ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-175355 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.58s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-042675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-042675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (277.801685ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-042675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-042675
helpers_test.go:243: (dbg) docker inspect newest-cni-042675:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce",
	        "Created": "2025-10-25T09:52:44.327443817Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 427684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:52:44.378582041Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce/hosts",
	        "LogPath": "/var/lib/docker/containers/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce-json.log",
	        "Name": "/newest-cni-042675",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-042675:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-042675",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce",
	                "LowerDir": "/var/lib/docker/overlay2/22ae4172cbe3c43d98e8b23c6d4928d84d681a598f6ccb09273b14bd2d20ccfb-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22ae4172cbe3c43d98e8b23c6d4928d84d681a598f6ccb09273b14bd2d20ccfb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22ae4172cbe3c43d98e8b23c6d4928d84d681a598f6ccb09273b14bd2d20ccfb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22ae4172cbe3c43d98e8b23c6d4928d84d681a598f6ccb09273b14bd2d20ccfb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-042675",
	                "Source": "/var/lib/docker/volumes/newest-cni-042675/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-042675",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-042675",
	                "name.minikube.sigs.k8s.io": "newest-cni-042675",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "68e699fd6c33fc8d61914a0f65b6f8cf96bf492006e1e44b9429fe38fccd11ae",
	            "SandboxKey": "/var/run/docker/netns/68e699fd6c33",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33225"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33226"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33229"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33227"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33228"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-042675": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:65:79:63:fe:2d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a3ae4e80fdc178e1b920fe2d5b1786ace400be5b54cd55cc0897dd02ba348996",
	                    "EndpointID": "c43c6ccf2225bc804113818028a59887f65a0b674e21970587691291da96273e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-042675",
	                        "3a2253343bb2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-042675 -n newest-cni-042675
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-042675 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-042675 logs -n 25: (1.071482484s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-035825 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                            │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                            │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl status docker --all --full --no-pager                                                                                                                                                             │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat docker --no-pager                                                                                                                                                                             │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /etc/docker/daemon.json                                                                                                                                                                                 │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p enable-default-cni-035825 sudo docker system info                                                                                                                                                                                          │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                    │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                              │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cri-dockerd --version                                                                                                                                                                                       │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-129588                                                                                                                                                                                                                  │ kubernetes-upgrade-129588    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat containerd --no-pager                                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                  │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /etc/containerd/config.toml                                                                                                                                                                             │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo containerd config dump                                                                                                                                                                                      │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl status crio --all --full --no-pager                                                                                                                                                               │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat crio --no-pager                                                                                                                                                                               │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                     │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo crio config                                                                                                                                                                                                 │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ delete  │ -p enable-default-cni-035825                                                                                                                                                                                                                  │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-042675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:52:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:52:38.183627  425060 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:52:38.183940  425060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:52:38.183951  425060 out.go:374] Setting ErrFile to fd 2...
	I1025 09:52:38.183957  425060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:52:38.184272  425060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:52:38.184941  425060 out.go:368] Setting JSON to false
	I1025 09:52:38.186500  425060 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5702,"bootTime":1761380256,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:52:38.186623  425060 start.go:141] virtualization: kvm guest
	I1025 09:52:38.267441  425060 out.go:179] * [newest-cni-042675] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:52:38.311834  425060 notify.go:220] Checking for updates...
	I1025 09:52:38.311885  425060 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:52:38.401333  425060 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:52:38.402726  425060 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:52:38.403747  425060 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:52:38.405961  425060 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:52:38.407521  425060 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:52:38.409595  425060 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:52:38.409739  425060 config.go:182] Loaded profile config "no-preload-656799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:52:38.409833  425060 config.go:182] Loaded profile config "old-k8s-version-676314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 09:52:38.409938  425060 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:52:38.438620  425060 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:52:38.438715  425060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:52:38.519193  425060 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:52:38.507645671 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:52:38.519408  425060 docker.go:318] overlay module found
	I1025 09:52:38.522715  425060 out.go:179] * Using the docker driver based on user configuration
	I1025 09:52:38.524455  425060 start.go:305] selected driver: docker
	I1025 09:52:38.524473  425060 start.go:925] validating driver "docker" against <nil>
	I1025 09:52:38.524485  425060 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:52:38.525088  425060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:52:38.600038  425060 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-25 09:52:38.587785444 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:52:38.600293  425060 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1025 09:52:38.600337  425060 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 09:52:38.600623  425060 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:52:38.602664  425060 out.go:179] * Using Docker driver with root privileges
	I1025 09:52:38.603618  425060 cni.go:84] Creating CNI manager for ""
	I1025 09:52:38.603749  425060 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:52:38.603762  425060 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:52:38.603873  425060 start.go:349] cluster config:
	{Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:52:38.605286  425060 out.go:179] * Starting "newest-cni-042675" primary control-plane node in "newest-cni-042675" cluster
	I1025 09:52:38.606734  425060 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:52:38.607875  425060 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:52:33.844331  423245 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:52:33.844618  423245 start.go:159] libmachine.API.Create for "default-k8s-diff-port-880773" (driver="docker")
	I1025 09:52:33.844657  423245 client.go:168] LocalClient.Create starting
	I1025 09:52:33.844757  423245 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem
	I1025 09:52:33.844801  423245 main.go:141] libmachine: Decoding PEM data...
	I1025 09:52:33.844822  423245 main.go:141] libmachine: Parsing certificate...
	I1025 09:52:33.844898  423245 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem
	I1025 09:52:33.844935  423245 main.go:141] libmachine: Decoding PEM data...
	I1025 09:52:33.844952  423245 main.go:141] libmachine: Parsing certificate...
	I1025 09:52:33.845463  423245 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-880773 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:52:33.865360  423245 cli_runner.go:211] docker network inspect default-k8s-diff-port-880773 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:52:33.865457  423245 network_create.go:284] running [docker network inspect default-k8s-diff-port-880773] to gather additional debugging logs...
	I1025 09:52:33.865487  423245 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-880773
	W1025 09:52:33.885198  423245 cli_runner.go:211] docker network inspect default-k8s-diff-port-880773 returned with exit code 1
	I1025 09:52:33.885235  423245 network_create.go:287] error running [docker network inspect default-k8s-diff-port-880773]: docker network inspect default-k8s-diff-port-880773: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-880773 not found
	I1025 09:52:33.885252  423245 network_create.go:289] output of [docker network inspect default-k8s-diff-port-880773]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-880773 not found
	
	** /stderr **
	I1025 09:52:33.885389  423245 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:52:33.904319  423245 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b89a58b7fce0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:e2:93:21:98:bc} reservation:<nil>}
	I1025 09:52:33.905107  423245 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4482374e86a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:20:65:c1:4a:19} reservation:<nil>}
	I1025 09:52:33.906078  423245 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7323bc384751 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:33:7f:07:f5:30} reservation:<nil>}
	I1025 09:52:33.906883  423245 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c5f8d7127b2a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:aa:5b:a1:8d:1b} reservation:<nil>}
	I1025 09:52:33.907954  423245 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f66217c06b76 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:e6:1a:ac:ee:2c:d7} reservation:<nil>}
	I1025 09:52:33.910245  423245 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c408b0}
	I1025 09:52:33.910276  423245 network_create.go:124] attempt to create docker network default-k8s-diff-port-880773 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1025 09:52:33.910330  423245 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-880773 default-k8s-diff-port-880773
	I1025 09:52:33.971660  423245 network_create.go:108] docker network default-k8s-diff-port-880773 192.168.94.0/24 created
	I1025 09:52:33.971700  423245 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-880773" container
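Note on the subnet selection above: minikube steps the third octet in increments of 9 (49, 58, 67, 76, 85, ...) and takes the first /24 with no existing bridge, which here is 192.168.94.0/24. A minimal Go sketch of that scan, with the taken set hard-coded from the "skipping subnet" lines in this log (an illustration, not minikube's actual code):

	package main

	import "fmt"

	func main() {
		// Bridges already present, per the "skipping subnet" lines above.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
			"192.168.85.0/24": true,
		}
		// Step the third octet by 9 and take the first free /24.
		for octet := 49; octet < 256; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				fmt.Println("using free private subnet", subnet) // 192.168.94.0/24
				return
			}
		}
	}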
	I1025 09:52:33.971777  423245 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:52:33.991701  423245 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-880773 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-880773 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:52:34.012109  423245 oci.go:103] Successfully created a docker volume default-k8s-diff-port-880773
	I1025 09:52:34.012214  423245 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-880773-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-880773 --entrypoint /usr/bin/test -v default-k8s-diff-port-880773:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:52:34.493307  423245 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-880773
	I1025 09:52:34.493384  423245 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:52:34.493415  423245 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:52:34.493500  423245 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-880773:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:52:38.418021  423245 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-880773:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.924457054s)
	I1025 09:52:38.418070  423245 kic.go:203] duration metric: took 3.924652933s to extract preloaded images to volume ...
	W1025 09:52:38.418169  423245 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:52:38.418218  423245 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:52:38.418262  423245 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:52:38.501120  423245 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-880773 --name default-k8s-diff-port-880773 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-880773 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-880773 --network default-k8s-diff-port-880773 --ip 192.168.94.2 --volume default-k8s-diff-port-880773:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:52:38.608903  425060 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:52:38.608949  425060 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:52:38.608968  425060 cache.go:58] Caching tarball of preloaded images
	I1025 09:52:38.608995  425060 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:52:38.609061  425060 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:52:38.609075  425060 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:52:38.609192  425060 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/config.json ...
	I1025 09:52:38.609217  425060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/config.json: {Name:mk3a978ad2a97562940bbea05747eeb5fd9fabed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:38.631257  425060 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:52:38.631278  425060 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:52:38.631299  425060 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:52:38.631360  425060 start.go:360] acquireMachinesLock for newest-cni-042675: {Name:mk7919472b767e9cb704209265f0c08926368ab3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:52:38.631485  425060 start.go:364] duration metric: took 99.56µs to acquireMachinesLock for "newest-cni-042675"
	I1025 09:52:38.631519  425060 start.go:93] Provisioning new machine with config: &{Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:52:38.631634  425060 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:52:36.360517  417881 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:52:36.360545  417881 start.go:495] detecting cgroup driver to use...
	I1025 09:52:36.360578  417881 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:52:36.360627  417881 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:52:36.380804  417881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:52:36.394531  417881 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:52:36.394590  417881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:52:36.414238  417881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:52:36.435263  417881 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:52:36.550064  417881 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:52:36.802131  417881 docker.go:234] disabling docker service ...
	I1025 09:52:36.802201  417881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:52:36.824588  417881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:52:36.839105  417881 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:52:37.084830  417881 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:52:37.179715  417881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:52:37.193872  417881 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:52:37.209326  417881 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:52:37.209410  417881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:37.225254  417881 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:52:37.225342  417881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:37.235020  417881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:37.244913  417881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:37.254215  417881 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:52:37.262702  417881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:37.272120  417881 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:37.287204  417881 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:37.296819  417881 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:52:37.304653  417881 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:52:37.313372  417881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:52:37.418014  417881 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:52:38.525908  417881 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.107858619s)
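Note on the cri-o reconfiguration above: the sed one-liners rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and force the systemd cgroup manager, after which crio is restarted via systemd. A rough Go equivalent of just the two file rewrites (a sketch, assuming the same path and config keys shown in the log):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
	}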
	I1025 09:52:38.525937  417881 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:52:38.525989  417881 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:52:38.531587  417881 start.go:563] Will wait 60s for crictl version
	I1025 09:52:38.531655  417881 ssh_runner.go:195] Run: which crictl
	I1025 09:52:38.536446  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:52:38.573368  417881 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:52:38.573448  417881 ssh_runner.go:195] Run: crio --version
	I1025 09:52:38.611949  417881 ssh_runner.go:195] Run: crio --version
	I1025 09:52:38.655539  417881 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:52:36.215030  416663 out.go:252]   - Generating certificates and keys ...
	I1025 09:52:36.215164  416663 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:52:36.215257  416663 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:52:36.309231  416663 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:52:36.494289  416663 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:52:36.634874  416663 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:52:36.760930  416663 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:52:37.115118  416663 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:52:37.115295  416663 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-676314] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:52:37.626415  416663 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:52:37.639647  416663 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-676314] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:52:37.899652  416663 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:52:38.099534  416663 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:52:38.276135  416663 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:52:38.276238  416663 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:52:38.419886  416663 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:52:38.783024  416663 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:52:38.841922  416663 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:52:39.065739  416663 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:52:39.070318  416663 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:52:39.077843  416663 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:52:39.079567  416663 out.go:252]   - Booting up control plane ...
	I1025 09:52:39.079787  416663 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:52:39.082717  416663 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:52:39.082817  416663 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:52:39.103620  416663 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:52:39.111118  416663 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:52:39.114989  416663 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:52:39.323376  416663 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 09:52:38.658298  417881 cli_runner.go:164] Run: docker network inspect no-preload-656799 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:52:38.678292  417881 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:52:38.683810  417881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
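Note on the /etc/hosts update above: the one-liner filters out any stale host.minikube.internal entry, appends the network gateway (192.168.76.1 for this cluster), and copies the result back via sudo. The same filter-and-append expressed in Go (an illustrative sketch; minikube runs the bash version on the node over SSH):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Drop any existing host.minikube.internal mapping.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		// Re-add the mapping for the current network gateway.
		kept = append(kept, "192.168.76.1\thost.minikube.internal")
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}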
	I1025 09:52:38.698134  417881 kubeadm.go:883] updating cluster {Name:no-preload-656799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-656799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:52:38.698295  417881 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:52:38.698359  417881 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:52:38.728673  417881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1025 09:52:38.728702  417881 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 09:52:38.728755  417881 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:52:38.728871  417881 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 09:52:38.728988  417881 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1025 09:52:38.729116  417881 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 09:52:38.729125  417881 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1025 09:52:38.729225  417881 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 09:52:38.729273  417881 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 09:52:38.729439  417881 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 09:52:38.730997  417881 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 09:52:38.731030  417881 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 09:52:38.730997  417881 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1025 09:52:38.731048  417881 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 09:52:38.731056  417881 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1025 09:52:38.731509  417881 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 09:52:38.731860  417881 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 09:52:38.732170  417881 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:52:38.870820  417881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1025 09:52:38.882458  417881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1025 09:52:38.882593  417881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 09:52:38.891756  417881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1025 09:52:38.902645  417881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1025 09:52:38.905740  417881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1025 09:52:38.913303  417881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1025 09:52:38.947574  417881 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1025 09:52:38.947618  417881 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 09:52:38.947917  417881 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1025 09:52:38.947958  417881 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1025 09:52:38.948004  417881 ssh_runner.go:195] Run: which crictl
	I1025 09:52:38.948127  417881 ssh_runner.go:195] Run: which crictl
	I1025 09:52:38.960228  417881 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1025 09:52:38.960275  417881 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 09:52:38.960321  417881 ssh_runner.go:195] Run: which crictl
	I1025 09:52:38.982960  417881 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1025 09:52:38.983018  417881 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 09:52:38.983083  417881 ssh_runner.go:195] Run: which crictl
	I1025 09:52:38.984832  417881 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1025 09:52:38.984886  417881 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 09:52:38.984932  417881 ssh_runner.go:195] Run: which crictl
	I1025 09:52:38.992045  417881 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1025 09:52:38.992093  417881 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 09:52:38.992143  417881 ssh_runner.go:195] Run: which crictl
	I1025 09:52:38.994532  417881 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1025 09:52:38.994590  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 09:52:38.994638  417881 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1025 09:52:38.994685  417881 ssh_runner.go:195] Run: which crictl
	I1025 09:52:38.994703  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 09:52:38.994746  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 09:52:38.994800  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 09:52:38.994827  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 09:52:38.999810  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 09:52:39.062045  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 09:52:39.062130  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 09:52:39.062199  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 09:52:39.062319  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 09:52:39.081850  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 09:52:39.081994  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 09:52:39.082073  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 09:52:39.134226  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 09:52:39.134354  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 09:52:39.134427  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 09:52:39.134511  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 09:52:39.134583  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 09:52:39.173134  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 09:52:39.173270  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 09:52:39.217085  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 09:52:39.217209  417881 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1025 09:52:39.217286  417881 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 09:52:39.217668  417881 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1025 09:52:39.217782  417881 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1025 09:52:39.218020  417881 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1025 09:52:39.218233  417881 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1025 09:52:39.226922  417881 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1025 09:52:39.227038  417881 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 09:52:39.244944  417881 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1025 09:52:39.244907  417881 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1025 09:52:39.245232  417881 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 09:52:39.245311  417881 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 09:52:39.258178  417881 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1025 09:52:39.258208  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1025 09:52:39.258246  417881 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1025 09:52:39.258273  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1025 09:52:39.258286  417881 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1025 09:52:39.258299  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1025 09:52:39.258395  417881 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1025 09:52:39.258415  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1025 09:52:39.258485  417881 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1025 09:52:39.258486  417881 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1025 09:52:39.258500  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1025 09:52:39.258567  417881 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1025 09:52:39.258719  417881 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1025 09:52:39.258743  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1025 09:52:39.339893  417881 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1025 09:52:39.339941  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
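
The stat/scp pairs above are minikube's cache shortcut: each image tarball is pushed to the node only when `stat -c "%s %y"` exits non-zero (the "cannot statx ... No such file or directory" cases). A minimal sketch of that check, shelling out to the system ssh/scp rather than minikube's internal ssh_runner (the host string is a hypothetical target, not the real machine config):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // copyIfMissing mirrors the check in the log: stat the remote tarball over
    // SSH and only copy the cached file when stat exits non-zero.
    func copyIfMissing(host, local, remote string) error {
    	// `stat -c "%s %y"` prints size and mtime; a missing path exits 1.
    	if err := exec.Command("ssh", host, `stat -c "%s %y" `+remote).Run(); err == nil {
    		return nil // already on the node, skip the transfer
    	}
    	return exec.Command("scp", local, host+":"+remote).Run()
    }

    func main() {
    	err := copyIfMissing("docker@127.0.0.1",
    		"/home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1",
    		"/var/lib/minikube/images/pause_3.10.1")
    	fmt.Println(err)
    }
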
	I1025 09:52:39.527463  417881 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1025 09:52:39.527546  417881 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1025 09:52:39.773047  417881 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1025 09:52:39.848504  417881 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 09:52:39.848577  417881 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 09:52:40.167866  417881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
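
Because CRI-O and podman share a containers/storage backend, each transferred tarball becomes visible to the CRI runtime through a plain `sudo podman load -i ...`, and the trailing `podman image inspect --format {{.Id}}` confirms what landed. A sketch of that load-and-verify step, assuming passwordless sudo on the node:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // loadImage loads a tarball into containers/storage (shared by podman and
    // CRI-O) and returns the image ID that podman reports for the reference.
    func loadImage(tar, ref string) (string, error) {
    	if err := exec.Command("sudo", "podman", "load", "-i", tar).Run(); err != nil {
    		return "", fmt.Errorf("podman load: %w", err)
    	}
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", ref).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	fmt.Println(loadImage("/var/lib/minikube/images/pause_3.10.1",
    		"registry.k8s.io/pause:3.10.1"))
    }
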
	I1025 09:52:38.633541  425060 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:52:38.633829  425060 start.go:159] libmachine.API.Create for "newest-cni-042675" (driver="docker")
	I1025 09:52:38.633869  425060 client.go:168] LocalClient.Create starting
	I1025 09:52:38.633970  425060 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem
	I1025 09:52:38.634018  425060 main.go:141] libmachine: Decoding PEM data...
	I1025 09:52:38.634044  425060 main.go:141] libmachine: Parsing certificate...
	I1025 09:52:38.634122  425060 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem
	I1025 09:52:38.634149  425060 main.go:141] libmachine: Decoding PEM data...
	I1025 09:52:38.634164  425060 main.go:141] libmachine: Parsing certificate...
	I1025 09:52:38.634584  425060 cli_runner.go:164] Run: docker network inspect newest-cni-042675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:52:38.663893  425060 cli_runner.go:211] docker network inspect newest-cni-042675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:52:38.663967  425060 network_create.go:284] running [docker network inspect newest-cni-042675] to gather additional debugging logs...
	I1025 09:52:38.663992  425060 cli_runner.go:164] Run: docker network inspect newest-cni-042675
	W1025 09:52:38.683443  425060 cli_runner.go:211] docker network inspect newest-cni-042675 returned with exit code 1
	I1025 09:52:38.683482  425060 network_create.go:287] error running [docker network inspect newest-cni-042675]: docker network inspect newest-cni-042675: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-042675 not found
	I1025 09:52:38.683504  425060 network_create.go:289] output of [docker network inspect newest-cni-042675]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-042675 not found
	
	** /stderr **
	I1025 09:52:38.683635  425060 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:52:38.707226  425060 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b89a58b7fce0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:e2:93:21:98:bc} reservation:<nil>}
	I1025 09:52:38.708109  425060 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4482374e86a6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:20:65:c1:4a:19} reservation:<nil>}
	I1025 09:52:38.709191  425060 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7323bc384751 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:33:7f:07:f5:30} reservation:<nil>}
	I1025 09:52:38.709980  425060 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c5f8d7127b2a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fe:aa:5b:a1:8d:1b} reservation:<nil>}
	I1025 09:52:38.710609  425060 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f66217c06b76 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:e6:1a:ac:ee:2c:d7} reservation:<nil>}
	I1025 09:52:38.711298  425060 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-6ddf7a97662f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:72:69:d2:ae:e7:13} reservation:<nil>}
	I1025 09:52:38.712178  425060 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e9dbe0}
	I1025 09:52:38.712216  425060 network_create.go:124] attempt to create docker network newest-cni-042675 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1025 09:52:38.712262  425060 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-042675 newest-cni-042675
	I1025 09:52:38.848778  425060 network_create.go:108] docker network newest-cni-042675 192.168.103.0/24 created
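
The run of "skipping subnet ... that is taken" lines is minikube's free-subnet scan: candidates start at 192.168.49.0/24 and the third octet steps by 9 (49, 58, 67, 76, 85, 94) until a candidate has no matching bridge interface, which is 192.168.103.0/24 here. A sketch of that walk; the stride of 9 is inferred from this log, not quoted from the source:

    package main

    import "fmt"

    // firstFreeSubnet walks 192.168.49.0/24 upward in steps of 9 (the stride
    // observed in the log) and returns the first /24 not in the taken set.
    func firstFreeSubnet(taken map[string]bool) string {
    	for octet := 49; octet <= 255; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return ""
    }

    func main() {
    	taken := map[string]bool{ // the six bridges the log skipped
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    		"192.168.85.0/24": true, "192.168.94.0/24": true,
    	}
    	fmt.Println(firstFreeSubnet(taken)) // 192.168.103.0/24, as chosen above
    }
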
	I1025 09:52:38.848814  425060 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-042675" container
	I1025 09:52:38.848915  425060 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:52:38.869290  425060 cli_runner.go:164] Run: docker volume create newest-cni-042675 --label name.minikube.sigs.k8s.io=newest-cni-042675 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:52:38.897686  425060 oci.go:103] Successfully created a docker volume newest-cni-042675
	I1025 09:52:38.897871  425060 cli_runner.go:164] Run: docker run --rm --name newest-cni-042675-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-042675 --entrypoint /usr/bin/test -v newest-cni-042675:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:52:39.492577  425060 oci.go:107] Successfully prepared a docker volume newest-cni-042675
	I1025 09:52:39.492628  425060 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:52:39.492652  425060 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:52:39.492727  425060 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-042675:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
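
Note that the preloaded images are never pulled over the network: the lz4 tarball on the host is bind-mounted read-only into a throwaway kicbase container whose entrypoint is tar, and unpacked straight into the volume that becomes the node's /var. The single long Run: line above, reflowed for readability:

    docker run --rm --entrypoint /usr/bin/tar \
      -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro \
      -v newest-cni-042675:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 \
      -I lz4 -xf /preloaded.tar -C /extractDir
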
	I1025 09:52:38.817980  423245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Running}}
	I1025 09:52:38.839934  423245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:52:38.861728  423245 cli_runner.go:164] Run: docker exec default-k8s-diff-port-880773 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:52:38.947383  423245 oci.go:144] the created container "default-k8s-diff-port-880773" has a running status.
	I1025 09:52:38.947466  423245 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa...
	I1025 09:52:39.421496  423245 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:52:39.463950  423245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:52:39.491082  423245 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:52:39.491115  423245 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-880773 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:52:39.557132  423245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:52:39.585889  423245 machine.go:93] provisionDockerMachine start ...
	I1025 09:52:39.586070  423245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:52:39.617443  423245 main.go:141] libmachine: Using SSH client type: native
	I1025 09:52:39.617800  423245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33220 <nil> <nil>}
	I1025 09:52:39.617818  423245 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:52:39.793137  423245 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-880773
	
	I1025 09:52:39.793168  423245 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-880773"
	I1025 09:52:39.793239  423245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:52:39.823437  423245 main.go:141] libmachine: Using SSH client type: native
	I1025 09:52:39.824281  423245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33220 <nil> <nil>}
	I1025 09:52:39.824325  423245 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-880773 && echo "default-k8s-diff-port-880773" | sudo tee /etc/hostname
	I1025 09:52:40.056061  423245 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-880773
	
	I1025 09:52:40.056149  423245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:52:40.075638  423245 main.go:141] libmachine: Using SSH client type: native
	I1025 09:52:40.075852  423245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33220 <nil> <nil>}
	I1025 09:52:40.075874  423245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-880773' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-880773/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-880773' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:52:40.222694  423245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:52:40.222734  423245 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:52:40.222786  423245 ubuntu.go:190] setting up certificates
	I1025 09:52:40.222804  423245 provision.go:84] configureAuth start
	I1025 09:52:40.222865  423245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:52:40.245306  423245 provision.go:143] copyHostCerts
	I1025 09:52:40.245397  423245 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:52:40.245414  423245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:52:40.245495  423245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:52:40.245614  423245 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:52:40.245630  423245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:52:40.245675  423245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:52:40.245771  423245 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:52:40.245783  423245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:52:40.245832  423245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:52:40.245908  423245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-880773 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-880773 localhost minikube]
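
provision.go:117 issues a server certificate signed by the minikube CA with the SAN set listed above (127.0.0.1, the node IP 192.168.94.2, the machine name, localhost, minikube). A self-contained crypto/x509 sketch of a certificate with that shape; the throwaway CA generated here is illustrative only, since minikube reuses certs/ca.pem and ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA for illustration; errors elided for brevity.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-880773"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
    		DNSNames:     []string{"default-k8s-diff-port-880773", "localhost", "minikube"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
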
	I1025 09:52:40.665712  423245 provision.go:177] copyRemoteCerts
	I1025 09:52:40.665802  423245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:52:40.665850  423245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:52:40.684779  423245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33220 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:52:40.790426  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:52:40.873800  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 09:52:40.896738  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:52:40.933729  423245 provision.go:87] duration metric: took 710.89973ms to configureAuth
	I1025 09:52:40.933762  423245 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:52:40.934046  423245 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:52:40.934184  423245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:52:40.956431  423245 main.go:141] libmachine: Using SSH client type: native
	I1025 09:52:40.956764  423245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33220 <nil> <nil>}
	I1025 09:52:40.956793  423245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:52:41.257164  423245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:52:41.257194  423245 machine.go:96] duration metric: took 1.671271169s to provisionDockerMachine
	I1025 09:52:41.257212  423245 client.go:171] duration metric: took 7.412543407s to LocalClient.Create
	I1025 09:52:41.257235  423245 start.go:167] duration metric: took 7.412619861s to libmachine.API.Create "default-k8s-diff-port-880773"
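
The CRIO_MINIKUBE_OPTIONS file written just above only takes effect because the kicbase crio unit sources it before the `systemctl restart crio`. The assumed shape of that wiring (the EnvironmentFile path comes from the command in the log; the unit layout itself is an assumption, not quoted from the image):

    # crio.service, assumed shape (not quoted from the kicbase image)
    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
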
	I1025 09:52:41.257246  423245 start.go:293] postStartSetup for "default-k8s-diff-port-880773" (driver="docker")
	I1025 09:52:41.257259  423245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:52:41.257332  423245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:52:41.257443  423245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:52:41.285549  423245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33220 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:52:41.410905  423245 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:52:41.416185  423245 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:52:41.416220  423245 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:52:41.416232  423245 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:52:41.416294  423245 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:52:41.416419  423245 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:52:41.416550  423245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:52:41.428256  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:52:41.457586  423245 start.go:296] duration metric: took 200.321794ms for postStartSetup
	I1025 09:52:41.458411  423245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:52:41.483279  423245 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/config.json ...
	I1025 09:52:41.483649  423245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:52:41.483719  423245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:52:41.508838  423245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33220 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:52:41.624307  423245 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:52:41.630594  423245 start.go:128] duration metric: took 7.788187185s to createHost
	I1025 09:52:41.630708  423245 start.go:83] releasing machines lock for "default-k8s-diff-port-880773", held for 7.788472382s
	I1025 09:52:41.630815  423245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:52:41.653646  423245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:52:41.653718  423245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:52:41.653960  423245 ssh_runner.go:195] Run: cat /version.json
	I1025 09:52:41.654043  423245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:52:41.679562  423245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33220 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:52:41.688280  423245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33220 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:52:41.797065  423245 ssh_runner.go:195] Run: systemctl --version
	I1025 09:52:41.871409  423245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:52:41.917603  423245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:52:41.924283  423245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:52:41.924387  423245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:52:41.958295  423245 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:52:41.958333  423245 start.go:495] detecting cgroup driver to use...
	I1025 09:52:41.958380  423245 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:52:41.958436  423245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:52:41.980606  423245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:52:41.996876  423245 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:52:41.996940  423245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:52:42.022050  423245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:52:42.046866  423245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:52:42.165067  423245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:52:42.308784  423245 docker.go:234] disabling docker service ...
	I1025 09:52:42.308883  423245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:52:42.339753  423245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:52:42.359651  423245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:52:42.478092  423245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:52:42.599771  423245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:52:42.615687  423245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:52:42.634296  423245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:52:42.634403  423245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:42.762581  423245 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:52:42.762687  423245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:42.888661  423245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:43.016298  423245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:43.147268  423245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:52:43.157912  423245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:43.267246  423245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:43.308113  423245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:43.320679  423245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:52:43.330863  423245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:52:43.343315  423245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:52:43.473329  423245 ssh_runner.go:195] Run: sudo systemctl restart crio
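
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf roughly in the following state before the `systemctl restart crio`; this is reconstructed from the commands in this log, not read from the file itself:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
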
	I1025 09:52:44.336822  423245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:52:44.336894  423245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:52:44.341828  423245 start.go:563] Will wait 60s for crictl version
	I1025 09:52:44.341888  423245 ssh_runner.go:195] Run: which crictl
	I1025 09:52:44.345644  423245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:52:44.376714  423245 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:52:44.376797  423245 ssh_runner.go:195] Run: crio --version
	I1025 09:52:44.415534  423245 ssh_runner.go:195] Run: crio --version
	I1025 09:52:44.456593  423245 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:52:42.101021  417881 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.252418547s)
	I1025 09:52:42.101055  417881 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1025 09:52:42.101082  417881 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1025 09:52:42.101544  417881 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.933635596s)
	I1025 09:52:42.101611  417881 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1025 09:52:42.101669  417881 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:52:42.101730  417881 ssh_runner.go:195] Run: which crictl
	I1025 09:52:42.101917  417881 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1025 09:52:42.108442  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:52:45.273895  417881 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (3.171923513s)
	I1025 09:52:45.273933  417881 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1025 09:52:45.273967  417881 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.16548663s)
	I1025 09:52:45.273974  417881 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 09:52:45.274035  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:52:45.274040  417881 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 09:52:45.305230  417881 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
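
cache_images.go:117 declared storage-provisioner in need of transfer because `podman image inspect --format {{.Id}}` did not return the pinned ID; the stale tag is then removed with crictl before the cached copy is loaded. The decision in miniature, with the expected ID taken from this log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer reports whether ref must be reloaded from the cache:
    // true when inspect fails (image absent) or the stored ID differs.
    func needsTransfer(ref, wantID string) bool {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", ref).Output()
    	return err != nil || strings.TrimSpace(string(out)) != wantID
    }

    func main() {
    	fmt.Println(needsTransfer("gcr.io/k8s-minikube/storage-provisioner:v5",
    		"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"))
    }
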
	I1025 09:52:45.325443  416663 kubeadm.go:318] [apiclient] All control plane components are healthy after 6.002636 seconds
	I1025 09:52:45.325704  416663 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:52:45.343495  416663 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:52:45.869041  416663 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:52:45.869330  416663 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-676314 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:52:46.386092  416663 kubeadm.go:318] [bootstrap-token] Using token: h3z5da.37quq26dfd7pj5kl
	I1025 09:52:46.387957  416663 out.go:252]   - Configuring RBAC rules ...
	I1025 09:52:46.388135  416663 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:52:46.394587  416663 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:52:46.405090  416663 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:52:46.410082  416663 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:52:46.415005  416663 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:52:46.419150  416663 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:52:46.437780  416663 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:52:46.658148  416663 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:52:46.800703  416663 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:52:46.801857  416663 kubeadm.go:318] 
	I1025 09:52:46.801949  416663 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:52:46.801961  416663 kubeadm.go:318] 
	I1025 09:52:46.802057  416663 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:52:46.802068  416663 kubeadm.go:318] 
	I1025 09:52:46.802097  416663 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:52:46.802168  416663 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:52:46.802239  416663 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:52:46.802256  416663 kubeadm.go:318] 
	I1025 09:52:46.802326  416663 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:52:46.802337  416663 kubeadm.go:318] 
	I1025 09:52:46.802416  416663 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:52:46.802426  416663 kubeadm.go:318] 
	I1025 09:52:46.802498  416663 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:52:46.802603  416663 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:52:46.802694  416663 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:52:46.802717  416663 kubeadm.go:318] 
	I1025 09:52:46.802891  416663 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:52:46.803015  416663 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:52:46.803023  416663 kubeadm.go:318] 
	I1025 09:52:46.803153  416663 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token h3z5da.37quq26dfd7pj5kl \
	I1025 09:52:46.803294  416663 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6e42eae48b755d443fba2bbd8cd2499bc8de14d7e81dc26af35578c948bc74ab \
	I1025 09:52:46.803329  416663 kubeadm.go:318] 	--control-plane 
	I1025 09:52:46.803338  416663 kubeadm.go:318] 
	I1025 09:52:46.803446  416663 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:52:46.803456  416663 kubeadm.go:318] 
	I1025 09:52:46.803551  416663 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token h3z5da.37quq26dfd7pj5kl \
	I1025 09:52:46.803673  416663 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6e42eae48b755d443fba2bbd8cd2499bc8de14d7e81dc26af35578c948bc74ab 
	I1025 09:52:46.805802  416663 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:52:46.805965  416663 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
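
The `--discovery-token-ca-cert-hash sha256:...` value in the join command above pins the cluster CA for joining nodes: it is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. It can be recomputed from ca.crt like so:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Path used on minikube control planes; stock kubeadm clusters keep
    	// the CA at /etc/kubernetes/pki/ca.crt instead.
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the cert.
    	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }
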
	I1025 09:52:46.805996  416663 cni.go:84] Creating CNI manager for ""
	I1025 09:52:46.806005  416663 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:52:46.807543  416663 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:52:44.457939  423245 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-880773 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:52:44.485221  423245 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 09:52:44.490389  423245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:52:44.509478  423245 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:52:44.509636  423245 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:52:44.509712  423245 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:52:44.553811  423245 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:52:44.553842  423245 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:52:44.553909  423245 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:52:44.586478  423245 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:52:44.586505  423245 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:52:44.586516  423245 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1025 09:52:44.586622  423245 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-880773 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
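
The doubled ExecStart in the generated kubelet drop-in is a systemd idiom, not a mistake: for ordinary service types systemd rejects a second ExecStart unless the inherited one is cleared first, so an empty `ExecStart=` resets the command list and the next line redefines it. The minimal shape of such a drop-in:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (shape only)
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet <flags as listed above>
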
	I1025 09:52:44.586734  423245 ssh_runner.go:195] Run: crio config
	I1025 09:52:44.654703  423245 cni.go:84] Creating CNI manager for ""
	I1025 09:52:44.654734  423245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:52:44.654751  423245 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:52:44.654771  423245 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-880773 NodeName:default-k8s-diff-port-880773 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:52:44.654914  423245 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-880773"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
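The extraArgs layout in the generated config is worth a note: kubeadm.k8s.io/v1beta4 replaced the v1beta3 string map with an ordered list of name/value pairs, which is why every flag above appears as a `- name:`/`value:` pair. The same controllerManager stanza in both schemas:

    # v1beta3 (older map form)
    controllerManager:
      extraArgs:
        allocate-node-cidrs: "true"

    # v1beta4 (ordered name/value list, as generated above)
    controllerManager:
      extraArgs:
        - name: "allocate-node-cidrs"
          value: "true"
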
	I1025 09:52:44.654972  423245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:52:44.666665  423245 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:52:44.666737  423245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:52:44.677166  423245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 09:52:44.697602  423245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:52:44.719905  423245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1025 09:52:44.740341  423245 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:52:44.746172  423245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:52:44.760802  423245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:52:44.884767  423245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:52:44.914022  423245 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773 for IP: 192.168.94.2
	I1025 09:52:44.914107  423245 certs.go:195] generating shared ca certs ...
	I1025 09:52:44.914144  423245 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:44.914292  423245 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:52:44.914339  423245 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:52:44.914377  423245 certs.go:257] generating profile certs ...
	I1025 09:52:44.914444  423245 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/client.key
	I1025 09:52:44.914458  423245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/client.crt with IP's: []
	I1025 09:52:45.181885  423245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/client.crt ...
	I1025 09:52:45.181921  423245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/client.crt: {Name:mk7c57b62b4606b47f6553d82c66f6f5af173ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:45.182089  423245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/client.key ...
	I1025 09:52:45.182107  423245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/client.key: {Name:mk6a8e534f888796cb83725a464516e01149b6b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:45.182221  423245 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key.bf049977
	I1025 09:52:45.182241  423245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.crt.bf049977 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1025 09:52:45.261391  423245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.crt.bf049977 ...
	I1025 09:52:45.261428  423245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.crt.bf049977: {Name:mk1b3bd13d06ec1aa1b05079a160e5d8abcacc17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:45.261620  423245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key.bf049977 ...
	I1025 09:52:45.261638  423245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key.bf049977: {Name:mkcfa511ca838a6e0e6570ca5cb655967747ae68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:45.261742  423245 certs.go:382] copying /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.crt.bf049977 -> /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.crt
	I1025 09:52:45.261849  423245 certs.go:386] copying /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key.bf049977 -> /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key
	I1025 09:52:45.261941  423245 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.key
	I1025 09:52:45.261967  423245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.crt with IP's: []
	I1025 09:52:46.223946  423245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.crt ...
	I1025 09:52:46.223993  423245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.crt: {Name:mk223f4da91cb2c0f825a9715b1e4af7c2221d77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:46.224188  423245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.key ...
	I1025 09:52:46.224211  423245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.key: {Name:mk0e14d36416742972914d8bf7dcd8bbd792f041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:46.224445  423245 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:52:46.224514  423245 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:52:46.224530  423245 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:52:46.224574  423245 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:52:46.224607  423245 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:52:46.224636  423245 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:52:46.224690  423245 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:52:46.226065  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:52:46.246395  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:52:46.265908  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:52:46.285897  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:52:46.304549  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 09:52:46.325327  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:52:46.349809  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:52:46.368761  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:52:46.392764  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:52:46.420312  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:52:46.447624  423245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:52:46.470159  423245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:52:46.493776  423245 ssh_runner.go:195] Run: openssl version
	I1025 09:52:46.502024  423245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:52:46.512913  423245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:52:46.517259  423245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:52:46.517327  423245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:52:46.566459  423245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:52:46.576323  423245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:52:46.586072  423245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:52:46.590783  423245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:52:46.590840  423245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:52:46.642186  423245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:52:46.652540  423245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:52:46.663474  423245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:52:46.668633  423245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:52:46.668704  423245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:52:46.729781  423245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
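The three openssl/ln rounds above follow OpenSSL's c_rehash convention: a CA under /etc/ssl/certs is looked up by the hash of its subject name, so each installed PEM gets a <subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A minimal Go sketch of one such step, assuming openssl is on PATH; this is an illustration, not minikube's own helper:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash mirrors the log's "openssl x509 -hash" + "ln -fs" pair:
    // it asks openssl for the subject-name hash of pemPath and links it as
    // /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find the CA.
    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // replicate ln -fs: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }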
	I1025 09:52:46.743206  423245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:52:46.750395  423245 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:52:46.750464  423245 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:52:46.750556  423245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:52:46.750616  423245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:52:46.784828  423245 cri.go:89] found id: ""
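cri.go:54 probes for existing kube-system containers before bootstrapping; the empty `found id: ""` result is what routes this run down the first-start path below. A rough Go equivalent of the probe (assumes crictl on the node; not the actual minikube code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemIDs mirrors the crictl query above: it lists all kube-system
    // container IDs; an empty result is what the log reports as `found id: ""`
    // and signals a fresh cluster.
    func listKubeSystemIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per line; Fields drops blanks
    }

    func main() {
        ids, err := listKubeSystemIDs()
        fmt.Println(ids, err)
    }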
	I1025 09:52:46.784905  423245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:52:46.793556  423245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:52:46.802951  423245 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:52:46.803012  423245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:52:46.813286  423245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:52:46.813314  423245 kubeadm.go:157] found existing configuration files:
	
	I1025 09:52:46.813379  423245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1025 09:52:46.823806  423245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:52:46.823884  423245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:52:46.832809  423245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1025 09:52:46.840594  423245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:52:46.840654  423245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:52:46.850237  423245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1025 09:52:46.860543  423245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:52:46.860610  423245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:52:46.869003  423245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1025 09:52:46.878031  423245 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:52:46.878086  423245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
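kubeadm.go:163 applies the same rule to all four kubeadm-managed kubeconfigs: if a file does not mention the expected control-plane endpoint, remove it so `kubeadm init` regenerates it; on a first start like this one every grep fails because the files simply do not exist yet. A condensed sketch of that loop (hypothetical helper; the real code shells each step out over SSH as logged above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // cleanStaleKubeconfigs removes any kubeadm-managed kubeconfig that does not
    // reference the expected control-plane endpoint, mirroring the grep-then-rm
    // sequence in the log. A missing file also fails the grep and is removed
    // (the rm -f equivalent), which is harmless on a first start.
    func cleanStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q not found in %s - removing\n", endpoint, f)
                os.Remove(f)
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8444")
    }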
	I1025 09:52:46.886512  423245 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:52:46.945059  423245 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:52:46.945131  423245 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:52:46.969331  423245 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:52:46.969424  423245 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:52:46.969517  423245 kubeadm.go:318] OS: Linux
	I1025 09:52:46.969629  423245 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:52:46.969713  423245 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:52:46.969798  423245 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:52:46.969910  423245 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:52:46.969995  423245 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:52:46.970064  423245 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:52:46.970144  423245 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:52:46.970215  423245 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:52:47.033055  423245 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:52:47.033192  423245 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:52:47.033318  423245 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:52:47.042817  423245 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:52:44.232266  425060 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-042675:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.73949168s)
	I1025 09:52:44.232301  425060 kic.go:203] duration metric: took 4.739645148s to extract preloaded images to volume ...
	W1025 09:52:44.232427  425060 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:52:44.232472  425060 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:52:44.232519  425060 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:52:44.305310  425060 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-042675 --name newest-cni-042675 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-042675 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-042675 --network newest-cni-042675 --ip 192.168.103.2 --volume newest-cni-042675:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:52:44.660113  425060 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Running}}
	I1025 09:52:44.685253  425060 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:52:44.712219  425060 cli_runner.go:164] Run: docker exec newest-cni-042675 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:52:44.771767  425060 oci.go:144] the created container "newest-cni-042675" has a running status.
	I1025 09:52:44.771811  425060 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa...
	I1025 09:52:44.958904  425060 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:52:44.995099  425060 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:52:45.020492  425060 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:52:45.020522  425060 kic_runner.go:114] Args: [docker exec --privileged newest-cni-042675 chown docker:docker /home/docker/.ssh/authorized_keys]
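kic.go:225 and kic_runner.go:191 mint an RSA key pair on the host, push the public half into the container as /home/docker/.ssh/authorized_keys, and chown it to the docker user. A minimal sketch of generating such a pair with golang.org/x/crypto/ssh (the key size and output paths are assumptions):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Private key, as would live under .minikube/machines/<name>/id_rsa.
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(priv),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
            panic(err)
        }

        // The public half, in authorized_keys format, is what gets copied into
        // the container as /home/docker/.ssh/authorized_keys.
        pub, err := ssh.NewPublicKey(&priv.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
            panic(err)
        }
    }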
	I1025 09:52:45.100742  425060 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:52:45.120274  425060 machine.go:93] provisionDockerMachine start ...
	I1025 09:52:45.120486  425060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:52:45.140490  425060 main.go:141] libmachine: Using SSH client type: native
	I1025 09:52:45.156612  425060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33225 <nil> <nil>}
	I1025 09:52:45.156643  425060 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:52:45.336861  425060 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-042675
	
	I1025 09:52:45.336891  425060 ubuntu.go:182] provisioning hostname "newest-cni-042675"
	I1025 09:52:45.336969  425060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:52:45.363232  425060 main.go:141] libmachine: Using SSH client type: native
	I1025 09:52:45.363747  425060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33225 <nil> <nil>}
	I1025 09:52:45.363770  425060 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-042675 && echo "newest-cni-042675" | sudo tee /etc/hostname
	I1025 09:52:45.529803  425060 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-042675
	
	I1025 09:52:45.529889  425060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:52:45.551600  425060 main.go:141] libmachine: Using SSH client type: native
	I1025 09:52:45.551898  425060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33225 <nil> <nil>}
	I1025 09:52:45.551932  425060 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-042675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-042675/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-042675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:52:45.697031  425060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:52:45.697063  425060 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:52:45.697086  425060 ubuntu.go:190] setting up certificates
	I1025 09:52:45.697098  425060 provision.go:84] configureAuth start
	I1025 09:52:45.697166  425060 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042675
	I1025 09:52:45.716109  425060 provision.go:143] copyHostCerts
	I1025 09:52:45.716185  425060 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:52:45.716199  425060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:52:45.716279  425060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:52:45.716420  425060 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:52:45.716432  425060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:52:45.716470  425060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:52:45.716540  425060 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:52:45.716549  425060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:52:45.716581  425060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:52:45.716642  425060 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.newest-cni-042675 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-042675]
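provision.go:117 has the minikube CA sign a server certificate whose SANs cover every name the endpoint answers to: 127.0.0.1, the container IP, and the host aliases listed above. A compact crypto/x509 sketch of minting a SAN-bearing server cert from a CA; the throwaway in-memory CA, serial numbers, and lifetimes here are illustrative only (minikube loads its persistent ca.pem/ca-key.pem instead):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA (errors elided for brevity in this sketch).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SAN list from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-042675"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-042675"},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }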
	I1025 09:52:46.193339  425060 provision.go:177] copyRemoteCerts
	I1025 09:52:46.193434  425060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:52:46.193487  425060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:52:46.214581  425060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33225 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:52:46.320670  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:52:46.345276  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:52:46.365271  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:52:46.388841  425060 provision.go:87] duration metric: took 691.730305ms to configureAuth
	I1025 09:52:46.388869  425060 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:52:46.389093  425060 config.go:182] Loaded profile config "newest-cni-042675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:52:46.389220  425060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:52:46.414841  425060 main.go:141] libmachine: Using SSH client type: native
	I1025 09:52:46.415127  425060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33225 <nil> <nil>}
	I1025 09:52:46.415153  425060 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:52:46.736646  425060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:52:46.736676  425060 machine.go:96] duration metric: took 1.616375798s to provisionDockerMachine
	I1025 09:52:46.736688  425060 client.go:171] duration metric: took 8.102808065s to LocalClient.Create
	I1025 09:52:46.736707  425060 start.go:167] duration metric: took 8.102879882s to libmachine.API.Create "newest-cni-042675"
	I1025 09:52:46.736722  425060 start.go:293] postStartSetup for "newest-cni-042675" (driver="docker")
	I1025 09:52:46.736735  425060 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:52:46.736811  425060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:52:46.736852  425060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:52:46.761762  425060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33225 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:52:46.871774  425060 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:52:46.876023  425060 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:52:46.876056  425060 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:52:46.876067  425060 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:52:46.876118  425060 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:52:46.876211  425060 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:52:46.876328  425060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:52:46.885008  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:52:46.909364  425060 start.go:296] duration metric: took 172.614102ms for postStartSetup
	I1025 09:52:46.909816  425060 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042675
	I1025 09:52:46.938541  425060 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/config.json ...
	I1025 09:52:46.938918  425060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:52:46.938968  425060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:52:46.961419  425060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33225 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:52:47.063255  425060 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:52:47.068409  425060 start.go:128] duration metric: took 8.436758574s to createHost
	I1025 09:52:47.068436  425060 start.go:83] releasing machines lock for "newest-cni-042675", held for 8.436936075s
	I1025 09:52:47.068506  425060 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042675
	I1025 09:52:47.090557  425060 ssh_runner.go:195] Run: cat /version.json
	I1025 09:52:47.090602  425060 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:52:47.090618  425060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:52:47.090668  425060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:52:47.117699  425060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33225 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:52:47.119756  425060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33225 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:52:47.227362  425060 ssh_runner.go:195] Run: systemctl --version
	I1025 09:52:47.304202  425060 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:52:47.354761  425060 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:52:47.360921  425060 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:52:47.360998  425060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:52:47.396863  425060 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
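cni.go sidelines any preinstalled bridge/podman CNI configs by renaming them with a .mk_disabled suffix, leaving the CNI minikube deploys (kindnet, chosen later in this run) as the only plugin CRI-O can load. A Go rendering of the same find-and-rename, with the glob patterns taken from the command above:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIs renames bridge/podman CNI configs out of the way, as the
    // `find ... -exec mv {} {}.mk_disabled` in the log does, skipping files that
    // were already disabled.
    func disableBridgeCNIs(dir string) ([]string, error) {
        var disabled []string
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        fmt.Println(disableBridgeCNIs("/etc/cni/net.d"))
    }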
	I1025 09:52:47.396893  425060 start.go:495] detecting cgroup driver to use...
	I1025 09:52:47.396925  425060 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:52:47.396976  425060 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:52:47.422190  425060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:52:47.439889  425060 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:52:47.439949  425060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:52:47.463591  425060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:52:47.489511  425060 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:52:47.609939  425060 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:52:47.749714  425060 docker.go:234] disabling docker service ...
	I1025 09:52:47.749787  425060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:52:47.778936  425060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:52:47.797642  425060 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:52:47.930956  425060 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:52:48.047205  425060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
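docker.go:218/234 stop, disable, and mask cri-docker and docker so neither can claim the CRI socket ahead of CRI-O; stop failures are tolerated because the units may be absent from the image. A condensed sketch of that stop/disable/mask ladder (the unit list and per-unit verbs are generalized slightly from the log):

    package main

    import (
        "log"
        "os/exec"
    )

    // quiesceUnit stops a systemd unit (best effort), then disables and masks it
    // so nothing can pull it back in - the same ladder the log applies to
    // cri-docker.socket, cri-docker.service, and the docker units.
    func quiesceUnit(unit string) {
        // stop may fail if the unit does not exist; that is fine.
        _ = exec.Command("sudo", "systemctl", "stop", "-f", unit).Run()
        for _, verb := range []string{"disable", "mask"} {
            if err := exec.Command("sudo", "systemctl", verb, unit).Run(); err != nil {
                log.Printf("systemctl %s %s: %v", verb, unit, err)
            }
        }
    }

    func main() {
        for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
            quiesceUnit(u)
        }
    }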
	I1025 09:52:48.060086  425060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:52:48.076143  425060 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:52:48.076203  425060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:48.086484  425060 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:52:48.086542  425060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:48.095460  425060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:48.104083  425060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:48.112694  425060 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:52:48.120955  425060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:48.129710  425060 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:48.143257  425060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:52:48.151757  425060 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:52:48.159084  425060 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:52:48.166998  425060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
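crio.go then edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to systemd, force conmon_cgroup to "pod", and add a default_sysctls entry opening unprivileged ports, before the daemon-reload above and the crio restart a few lines below (the intervening 423245 lines are the parallel default-k8s-diff-port test). A sketch of one such idempotent whole-line rewrite, standing in for the sed calls; the helper is hypothetical:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setTOMLKey replaces the whole `key = ...` line in a CRI-O drop-in, the way
    // the log's `sed -i 's|^.*cgroup_manager = .*$|...|'` invocations do.
    func setTOMLKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        _ = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        _ = setTOMLKey(conf, "cgroup_manager", "systemd")
    }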
	I1025 09:52:47.045047  423245 out.go:252]   - Generating certificates and keys ...
	I1025 09:52:47.045167  423245 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:52:47.045265  423245 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:52:47.773226  423245 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:52:47.796725  423245 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:52:48.128564  423245 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:52:48.250860  425060 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:52:48.741849  425060 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:52:48.741930  425060 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:52:48.747091  425060 start.go:563] Will wait 60s for crictl version
	I1025 09:52:48.747152  425060 ssh_runner.go:195] Run: which crictl
	I1025 09:52:48.751235  425060 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:52:48.781404  425060 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:52:48.781491  425060 ssh_runner.go:195] Run: crio --version
	I1025 09:52:48.814969  425060 ssh_runner.go:195] Run: crio --version
	I1025 09:52:48.846714  425060 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:52:48.847766  425060 cli_runner.go:164] Run: docker network inspect newest-cni-042675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:52:48.866706  425060 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:52:48.871389  425060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
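The bash one-liner above makes host.minikube.internal resolvable inside the node: drop any stale mapping, append the gateway IP, and cp the temp file back (cp rather than mv, so the bind-mounted /etc/hosts keeps its inode). A Go rendering of the same filter-and-append, with the path and IP from the log:

    package main

    import (
        "os"
        "strings"
    )

    // pinHost rewrites /etc/hosts so exactly one line maps the given name,
    // mirroring the grep -v / echo / cp pipeline in the log.
    func pinHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) { // drop any previous mapping
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        // Write the contents back in place (the cp in the log serves the same purpose).
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        _ = pinHost("/etc/hosts", "192.168.103.1", "host.minikube.internal")
    }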
	I1025 09:52:48.884424  425060 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 09:52:46.808694  416663 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:52:46.815238  416663 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1025 09:52:46.815259  416663 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:52:46.831926  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:52:47.759772  416663 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:52:47.759971  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:47.760069  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-676314 minikube.k8s.io/updated_at=2025_10_25T09_52_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=old-k8s-version-676314 minikube.k8s.io/primary=true
	I1025 09:52:47.772948  416663 ops.go:34] apiserver oom_adj: -16
	I1025 09:52:47.886318  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:48.387297  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:48.886826  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:49.387138  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
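The repeated `kubectl get sa default` runs above, roughly 500ms apart, are a readiness gate: the default ServiceAccount only appears once the controller-manager's service-account controller is up, so its existence means workloads can be admitted. A sketch of such a poll loop (the timeout chosen here is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds, the way
    // the log retries above, giving the controller-manager time to create the
    // default ServiceAccount in the default namespace.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            err := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", kubeconfig).Run()
            if err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default serviceaccount not ready after %s", timeout)
    }

    func main() {
        fmt.Println(waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute))
    }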
	I1025 09:52:48.885424  425060 kubeadm.go:883] updating cluster {Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:52:48.885532  425060 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:52:48.885589  425060 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:52:48.925171  425060 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:52:48.925198  425060 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:52:48.925259  425060 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:52:48.954643  425060 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:52:48.954670  425060 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:52:48.954679  425060 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:52:48.954780  425060 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-042675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:52:48.954867  425060 ssh_runner.go:195] Run: crio config
	I1025 09:52:49.013413  425060 cni.go:84] Creating CNI manager for ""
	I1025 09:52:49.013442  425060 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:52:49.013467  425060 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 09:52:49.013498  425060 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-042675 NodeName:newest-cni-042675 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:52:49.013713  425060 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-042675"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:52:49.013804  425060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:52:49.022274  425060 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:52:49.022367  425060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:52:49.031015  425060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 09:52:49.045579  425060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:52:49.062153  425060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 09:52:49.075714  425060 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:52:49.079787  425060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:52:49.090886  425060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:52:49.180675  425060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:52:49.217361  425060 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675 for IP: 192.168.103.2
	I1025 09:52:49.217386  425060 certs.go:195] generating shared ca certs ...
	I1025 09:52:49.217408  425060 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:49.217566  425060 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:52:49.217657  425060 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:52:49.217678  425060 certs.go:257] generating profile certs ...
	I1025 09:52:49.217750  425060 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/client.key
	I1025 09:52:49.217767  425060 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/client.crt with IP's: []
	I1025 09:52:49.373049  425060 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/client.crt ...
	I1025 09:52:49.373077  425060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/client.crt: {Name:mk5e288a6c36ae07d2ca232a768c54eb2a7138bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:49.373282  425060 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/client.key ...
	I1025 09:52:49.373300  425060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/client.key: {Name:mk724bb16aec3727ef010ec3b6e55a105992d9fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:49.373477  425060 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.key.c1b0a430
	I1025 09:52:49.373534  425060 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.crt.c1b0a430 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1025 09:52:49.453733  425060 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.crt.c1b0a430 ...
	I1025 09:52:49.453769  425060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.crt.c1b0a430: {Name:mkc3d37945bc73f7191ac7e85b9d9fb9243b911e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:49.453959  425060 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.key.c1b0a430 ...
	I1025 09:52:49.453978  425060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.key.c1b0a430: {Name:mk26a50de93980e68c01146c46956959fc2bac16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:49.454075  425060 certs.go:382] copying /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.crt.c1b0a430 -> /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.crt
	I1025 09:52:49.454186  425060 certs.go:386] copying /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.key.c1b0a430 -> /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.key
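certs.go:382/386 write the apiserver pair under a suffixed name (apiserver.crt.c1b0a430) and then copy it to the canonical path; the suffix serves as a cache key for the requested SAN set, so a profile whose IPs change gets a freshly signed cert rather than a stale cached one. A sketch of deriving a suffix like that from the SAN list; the specific hash used here is an assumption, not necessarily what minikube does:

    package main

    import (
        "crypto/sha1"
        "fmt"
        "sort"
        "strings"
    )

    // sanSuffix returns a short, stable fingerprint of a SAN set; two cert
    // requests with the same names map to the same cached file name, e.g.
    // apiserver.crt.<suffix>. (Hash choice is illustrative.)
    func sanSuffix(sans []string) string {
        sorted := append([]string(nil), sans...)
        sort.Strings(sorted)
        sum := sha1.Sum([]byte(strings.Join(sorted, ",")))
        return fmt.Sprintf("%x", sum[:4]) // 8 hex chars, like c1b0a430
    }

    func main() {
        fmt.Println(sanSuffix([]string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "192.168.103.2"}))
    }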
	I1025 09:52:49.454266  425060 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.key
	I1025 09:52:49.454291  425060 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.crt with IP's: []
	I1025 09:52:49.523036  425060 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.crt ...
	I1025 09:52:49.523064  425060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.crt: {Name:mka06e47588dfa337299b70d60b9e2903fabb7e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:49.523248  425060 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.key ...
	I1025 09:52:49.523265  425060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.key: {Name:mk2adafefde9873937b6f73a039a158af7d14ac2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:49.523488  425060 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:52:49.523524  425060 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:52:49.523554  425060 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:52:49.523589  425060 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:52:49.523612  425060 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:52:49.523634  425060 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:52:49.523671  425060 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:52:49.524236  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:52:49.544442  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:52:49.562728  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:52:49.580713  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:52:49.599035  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:52:49.617898  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:52:49.636089  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:52:49.654715  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:52:49.673531  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:52:49.693265  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:52:49.711641  425060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:52:49.729569  425060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:52:49.742656  425060 ssh_runner.go:195] Run: openssl version
	I1025 09:52:49.748822  425060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:52:49.757441  425060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:52:49.761681  425060 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:52:49.761741  425060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:52:49.796159  425060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:52:49.805906  425060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:52:49.814687  425060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:52:49.818680  425060 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:52:49.818734  425060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:52:49.852956  425060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:52:49.862404  425060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:52:49.871221  425060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:52:49.875161  425060 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:52:49.875214  425060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:52:49.926137  425060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:52:49.936744  425060 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:52:49.941081  425060 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:52:49.941136  425060 kubeadm.go:400] StartCluster: {Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:52:49.941224  425060 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:52:49.941299  425060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:52:49.972424  425060 cri.go:89] found id: ""
	I1025 09:52:49.972506  425060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:52:49.981572  425060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:52:49.990263  425060 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:52:49.990333  425060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:52:49.998370  425060 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:52:49.998389  425060 kubeadm.go:157] found existing configuration files:
	
	I1025 09:52:49.998437  425060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:52:50.006724  425060 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:52:50.006785  425060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:52:50.014529  425060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:52:50.022406  425060 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:52:50.022479  425060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:52:50.029854  425060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:52:50.037772  425060 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:52:50.037814  425060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:52:50.045152  425060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:52:50.052886  425060 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:52:50.052942  425060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
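	The grep/rm sequence above is a stale-config sweep: any kubeconfig under /etc/kubernetes that does not point at the expected control-plane endpoint is deleted so that kubeadm init regenerates it. The same loop as a standalone sketch (endpoint and file names taken from the log):
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
	            sudo rm -f "/etc/kubernetes/$f"   # missing or stale: let kubeadm init write a fresh one
	        fi
	    done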
	I1025 09:52:50.060661  425060 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:52:50.104409  425060 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:52:50.104463  425060 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:52:50.130252  425060 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:52:50.130360  425060 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:52:50.130434  425060 kubeadm.go:318] OS: Linux
	I1025 09:52:50.130523  425060 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:52:50.130599  425060 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:52:50.130679  425060 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:52:50.130750  425060 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:52:50.130836  425060 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:52:50.130904  425060 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:52:50.130973  425060 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:52:50.131035  425060 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:52:50.205775  425060 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:52:50.205936  425060 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:52:50.206094  425060 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:52:50.213673  425060 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:52:46.601643  417881 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.327567077s)
	I1025 09:52:46.601676  417881 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1025 09:52:46.601705  417881 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 09:52:46.601703  417881 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.29643792s)
	I1025 09:52:46.601753  417881 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 09:52:46.601754  417881 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 09:52:46.601847  417881 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 09:52:47.950989  417881 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.34917023s)
	I1025 09:52:47.951026  417881 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1025 09:52:47.951051  417881 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 09:52:47.951057  417881 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.349184375s)
	I1025 09:52:47.951097  417881 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 09:52:47.951140  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1025 09:52:47.951102  417881 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 09:52:49.496822  417881 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.545589721s)
	I1025 09:52:49.496857  417881 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1025 09:52:49.496905  417881 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1025 09:52:49.496967  417881 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1025 09:52:50.216370  425060 out.go:252]   - Generating certificates and keys ...
	I1025 09:52:50.216485  425060 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:52:50.216588  425060 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:52:50.979414  425060 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:52:51.441913  425060 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:52:51.783821  425060 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:52:52.107301  425060 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:52:52.306321  425060 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:52:52.306554  425060 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-042675] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:52:52.523038  425060 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:52:52.523258  425060 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-042675] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1025 09:52:53.094460  425060 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
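	kubeadm signs each etcd serving certificate against the etcd CA with the SANs listed above. A rough equivalent with the openssl CLI, assuming an existing CA at ca.crt/ca.key and OpenSSL 3.x (needed for -copy_extensions); the DNS names and IPs mirror the log:
	    openssl genrsa -out etcd-server.key 2048
	    openssl req -new -key etcd-server.key -subj "/CN=newest-cni-042675" \
	        -addext "subjectAltName=DNS:localhost,DNS:newest-cni-042675,IP:127.0.0.1,IP:192.168.103.2" \
	        -out etcd-server.csr
	    openssl x509 -req -in etcd-server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	        -copy_extensions copy -days 365 -out etcd-server.crt   # copies the SANs from the CSR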
	I1025 09:52:48.922131  423245 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:52:49.043637  423245 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:52:49.043895  423245 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-880773 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 09:52:49.564794  423245 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:52:49.565021  423245 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-880773 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1025 09:52:49.923236  423245 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:52:50.126180  423245 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:52:50.492656  423245 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:52:50.492781  423245 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:52:51.226212  423245 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:52:51.399982  423245 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:52:52.681952  423245 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:52:53.008417  423245 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:52:53.277849  423245 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:52:53.349782  423245 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:52:53.384973  423245 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:52:53.387454  423245 out.go:252]   - Booting up control plane ...
	I1025 09:52:53.387599  423245 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:52:53.387747  423245 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:52:53.388019  423245 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:52:53.403444  423245 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:52:53.403610  423245 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:52:53.410869  423245 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:52:53.411009  423245 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:52:53.411073  423245 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:52:53.537318  423245 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:52:53.537528  423245 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:52:49.886495  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:50.386405  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:50.887055  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:51.386500  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:51.886476  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:52.386494  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:52.886682  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:53.386408  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:53.887025  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:54.386480  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:53.537210  425060 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:52:53.619513  425060 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:52:53.619619  425060 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:52:53.793266  425060 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:52:53.983034  425060 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:52:54.422076  425060 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:52:54.981914  425060 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:52:55.587848  425060 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:52:55.588618  425060 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:52:55.594089  425060 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:52:53.452057  417881 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.955059588s)
	I1025 09:52:53.452098  417881 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1025 09:52:53.452124  417881 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 09:52:53.452178  417881 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1025 09:52:54.057526  417881 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 09:52:54.057578  417881 cache_images.go:124] Successfully loaded all cached images
	I1025 09:52:54.057585  417881 cache_images.go:93] duration metric: took 15.328868445s to LoadCachedImages
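	The image loading above goes through podman because CRI-O and podman share the same containers/storage backend, so anything podman loads becomes visible to the CRI. A minimal sketch of the load loop, assuming the cached tarballs under /var/lib/minikube/images:
	    for tar in /var/lib/minikube/images/*; do
	        sudo podman load -i "$tar"
	    done
	    sudo crictl images   # the loaded images should now be listed by the CRI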
	I1025 09:52:54.057603  417881 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 09:52:54.057722  417881 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-656799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-656799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:52:54.057798  417881 ssh_runner.go:195] Run: crio config
	I1025 09:52:54.108684  417881 cni.go:84] Creating CNI manager for ""
	I1025 09:52:54.108706  417881 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:52:54.108721  417881 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:52:54.108746  417881 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-656799 NodeName:no-preload-656799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:52:54.108893  417881 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-656799"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
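	The generated file above carries four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and can be sanity-checked before kubeadm init runs; a sketch, noting that `kubeadm config validate` is only available in recent kubeadm releases:
	    kubeadm config print init-defaults                               # baseline to diff against
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml  # recent kubeadm only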
	
	I1025 09:52:54.108974  417881 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:52:54.118526  417881 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1025 09:52:54.118591  417881 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1025 09:52:54.127945  417881 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1025 09:52:54.127983  417881 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1025 09:52:54.128029  417881 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1025 09:52:54.127984  417881 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1025 09:52:54.133177  417881 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1025 09:52:54.133210  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1025 09:52:55.077392  417881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:52:55.095781  417881 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1025 09:52:55.101624  417881 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1025 09:52:55.101658  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1025 09:52:55.355992  417881 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1025 09:52:55.361094  417881 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1025 09:52:55.361131  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
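	The kubelet/kubeadm/kubectl downloads above are verified against the published .sha256 files, per the dl.k8s.io convention; the same check by hand:
	    V=v1.34.1
	    curl -LO "https://dl.k8s.io/release/$V/bin/linux/amd64/kubelet"
	    curl -LO "https://dl.k8s.io/release/$V/bin/linux/amd64/kubelet.sha256"
	    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # prints "kubelet: OK" on success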
	I1025 09:52:55.576215  417881 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:52:55.586532  417881 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 09:52:55.604963  417881 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:52:55.624020  417881 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1025 09:52:55.644994  417881 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:52:55.649965  417881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
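	The one-liner above makes the control-plane.minikube.internal record idempotent: strip any old line for the name, then append the current IP. The same pattern as a reusable sketch (add_host is a hypothetical helper, not a minikube function):
	    add_host() {   # drop any existing record for the name, then append the fresh one
	        local ip=$1 name=$2
	        { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	        sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	    }
	    add_host 192.168.76.2 control-plane.minikube.internal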
	I1025 09:52:55.664181  417881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:52:55.781921  417881 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:52:55.807505  417881 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799 for IP: 192.168.76.2
	I1025 09:52:55.807532  417881 certs.go:195] generating shared ca certs ...
	I1025 09:52:55.807552  417881 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:55.807720  417881 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:52:55.807771  417881 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:52:55.807787  417881 certs.go:257] generating profile certs ...
	I1025 09:52:55.807857  417881 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/client.key
	I1025 09:52:55.807874  417881 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/client.crt with IP's: []
	I1025 09:52:56.396919  417881 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/client.crt ...
	I1025 09:52:56.396948  417881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/client.crt: {Name:mkb775dffdeac68e9414ae627760f09e09d169a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:56.397134  417881 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/client.key ...
	I1025 09:52:56.397149  417881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/client.key: {Name:mk5450f07271d75c938a37f82f320c86d130e4c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:56.397265  417881 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/apiserver.key.865cdb63
	I1025 09:52:56.397281  417881 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/apiserver.crt.865cdb63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 09:52:56.791266  417881 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/apiserver.crt.865cdb63 ...
	I1025 09:52:56.791302  417881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/apiserver.crt.865cdb63: {Name:mk62a582ab6e36da03d214364ae18b8fc9460c9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:56.791510  417881 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/apiserver.key.865cdb63 ...
	I1025 09:52:56.791526  417881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/apiserver.key.865cdb63: {Name:mk704beb7332ab0cd7a3c3198b79c8da88161cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:56.791655  417881 certs.go:382] copying /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/apiserver.crt.865cdb63 -> /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/apiserver.crt
	I1025 09:52:56.791751  417881 certs.go:386] copying /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/apiserver.key.865cdb63 -> /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/apiserver.key
	I1025 09:52:56.791840  417881 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/proxy-client.key
	I1025 09:52:56.791862  417881 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/proxy-client.crt with IP's: []
	I1025 09:52:57.011200  417881 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/proxy-client.crt ...
	I1025 09:52:57.011234  417881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/proxy-client.crt: {Name:mkcd8ea98285392ee0b0c637062abb0a63a1f065 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:57.011475  417881 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/proxy-client.key ...
	I1025 09:52:57.011503  417881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/proxy-client.key: {Name:mk69a981fa1ca5a038589f00037e7778991f10fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:57.011841  417881 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:52:57.011899  417881 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:52:57.011915  417881 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:52:57.011954  417881 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:52:57.011988  417881 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:52:57.012024  417881 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:52:57.012093  417881 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:52:57.012922  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:52:57.138456  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:52:57.227569  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:52:57.256064  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:52:57.281404  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:52:57.305843  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:52:57.340912  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:52:57.366489  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:52:57.393133  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:52:57.423455  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:52:57.450193  417881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:52:57.471949  417881 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:52:57.489060  417881 ssh_runner.go:195] Run: openssl version
	I1025 09:52:57.496465  417881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:52:57.506278  417881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:52:57.510807  417881 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:52:57.510878  417881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:52:57.548186  417881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:52:57.558010  417881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:52:57.567880  417881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:52:57.573278  417881 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:52:57.573335  417881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:52:57.615341  417881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:52:57.626007  417881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:52:57.637920  417881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:52:57.643284  417881 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:52:57.643376  417881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:52:57.700978  417881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:52:57.714374  417881 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:52:57.720192  417881 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:52:57.720285  417881 kubeadm.go:400] StartCluster: {Name:no-preload-656799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-656799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:52:57.720411  417881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:52:57.720470  417881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:52:57.759980  417881 cri.go:89] found id: ""
	I1025 09:52:57.760062  417881 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:52:57.771504  417881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:52:57.783979  417881 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:52:57.784040  417881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:52:57.794791  417881 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:52:57.794813  417881 kubeadm.go:157] found existing configuration files:
	
	I1025 09:52:57.794863  417881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:52:57.805806  417881 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:52:57.805906  417881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:52:57.816017  417881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:52:57.827846  417881 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:52:57.827915  417881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:52:57.838634  417881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:52:57.850055  417881 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:52:57.850127  417881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:52:57.861183  417881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:52:57.872210  417881 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:52:57.872270  417881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:52:57.883594  417881 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:52:57.938954  417881 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:52:57.939065  417881 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:52:57.971760  417881 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:52:57.971871  417881 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:52:57.971916  417881 kubeadm.go:318] OS: Linux
	I1025 09:52:57.971977  417881 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:52:57.972034  417881 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:52:57.972098  417881 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:52:57.972157  417881 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:52:57.972215  417881 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:52:57.972280  417881 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:52:57.972341  417881 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:52:57.972413  417881 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:52:58.051197  417881 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:52:58.051373  417881 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:52:58.051504  417881 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:52:58.072416  417881 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:52:55.595560  425060 out.go:252]   - Booting up control plane ...
	I1025 09:52:55.595681  425060 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:52:55.597620  425060 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:52:55.598713  425060 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:52:55.617389  425060 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:52:55.617535  425060 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:52:55.628576  425060 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:52:55.629050  425060 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:52:55.629139  425060 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:52:55.772477  425060 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:52:55.772667  425060 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:52:56.273269  425060 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.993161ms
	I1025 09:52:56.276815  425060 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:52:56.277048  425060 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1025 09:52:56.277216  425060 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:52:56.277372  425060 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:52:54.538124  423245 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000969369s
	I1025 09:52:54.541000  423245 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:52:54.541123  423245 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1025 09:52:54.541231  423245 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:52:54.541301  423245 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:52:56.270616  423245 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.729534447s
	I1025 09:52:57.039145  423245 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.498162949s
	I1025 09:52:58.543178  423245 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002105252s
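	The control-plane checks above poll each component's own health endpoint; they can be probed by hand from the node (URLs from the log; -k skips TLS verification for a quick manual probe, use --cacert /var/lib/minikube/certs/ca.crt for a real one):
	    curl -sk https://192.168.94.2:8444/livez     # kube-apiserver
	    curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager (localhost only)
	    curl -sk https://127.0.0.1:10259/livez       # kube-scheduler (localhost only)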
	I1025 09:52:58.556296  423245 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:52:58.567095  423245 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:52:58.578544  423245 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:52:58.579861  423245 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-880773 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:52:58.596612  423245 kubeadm.go:318] [bootstrap-token] Using token: 3e2to9.9qk5lz9nbbbpeife
	I1025 09:52:58.598301  423245 out.go:252]   - Configuring RBAC rules ...
	I1025 09:52:58.598478  423245 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:52:58.602332  423245 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:52:58.609451  423245 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:52:58.612273  423245 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:52:54.886555  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:55.386726  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:55.886612  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:56.387283  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:56.886490  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:57.387173  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:57.887263  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:58.386486  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:58.887210  416663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:58.983925  416663 kubeadm.go:1113] duration metric: took 11.224012963s to wait for elevateKubeSystemPrivileges
	I1025 09:52:58.983966  416663 kubeadm.go:402] duration metric: took 23.404360293s to StartCluster
	I1025 09:52:58.983989  416663 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:58.984068  416663 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:52:58.985061  416663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:52:58.985326  416663 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:52:58.985361  416663 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:52:58.985438  416663 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:52:58.985531  416663 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-676314"
	I1025 09:52:58.985550  416663 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-676314"
	I1025 09:52:58.985597  416663 host.go:66] Checking if "old-k8s-version-676314" exists ...
	I1025 09:52:58.985598  416663 config.go:182] Loaded profile config "old-k8s-version-676314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 09:52:58.985611  416663 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-676314"
	I1025 09:52:58.985638  416663 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-676314"
	I1025 09:52:58.986035  416663 cli_runner.go:164] Run: docker container inspect old-k8s-version-676314 --format={{.State.Status}}
	I1025 09:52:58.986406  416663 cli_runner.go:164] Run: docker container inspect old-k8s-version-676314 --format={{.State.Status}}
	I1025 09:52:58.987293  416663 out.go:179] * Verifying Kubernetes components...
	I1025 09:52:58.988636  416663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:52:59.025063  416663 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:52:59.026843  416663 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:52:59.026864  416663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:52:59.027028  416663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-676314
	I1025 09:52:59.029172  416663 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-676314"
	I1025 09:52:59.029642  416663 host.go:66] Checking if "old-k8s-version-676314" exists ...
	I1025 09:52:59.030180  416663 cli_runner.go:164] Run: docker container inspect old-k8s-version-676314 --format={{.State.Status}}
	I1025 09:52:59.068743  416663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33215 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/old-k8s-version-676314/id_rsa Username:docker}
	I1025 09:52:59.071549  416663 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:52:59.071990  416663 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:52:59.072098  416663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-676314
	I1025 09:52:59.101613  416663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33215 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/old-k8s-version-676314/id_rsa Username:docker}
	I1025 09:52:59.184429  416663 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:52:59.232830  416663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:52:59.243031  416663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:52:59.281684  416663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:52:59.641966  416663 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1025 09:52:59.643087  416663 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-676314" to be "Ready" ...
	I1025 09:52:59.868655  416663 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
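	Note: the addon-enable flow above reduces to copying each manifest to the node and applying it with the bundled kubectl. A minimal by-hand reproduction, a sketch assuming the exact paths shown in the preceding log lines (manifests already scp'd to the node):

	    # apply the storage-provisioner manifest with the version-pinned kubectl
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.28.0/kubectl apply \
	      -f /etc/kubernetes/addons/storage-provisioner.yaml
	    # same pattern for the default storage class
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.28.0/kubectl apply \
	      -f /etc/kubernetes/addons/storageclass.yaml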
	I1025 09:52:58.615325  423245 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:52:58.618216  423245 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:52:58.951738  423245 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:52:59.422415  423245 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:52:59.951648  423245 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:52:59.951678  423245 kubeadm.go:318] 
	I1025 09:52:59.951936  423245 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:52:59.951956  423245 kubeadm.go:318] 
	I1025 09:52:59.952088  423245 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:52:59.952113  423245 kubeadm.go:318] 
	I1025 09:52:59.952154  423245 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:52:59.952230  423245 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:52:59.952294  423245 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:52:59.952300  423245 kubeadm.go:318] 
	I1025 09:52:59.952382  423245 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:52:59.952389  423245 kubeadm.go:318] 
	I1025 09:52:59.952446  423245 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:52:59.952451  423245 kubeadm.go:318] 
	I1025 09:52:59.952516  423245 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:52:59.952611  423245 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:52:59.952705  423245 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:52:59.952712  423245 kubeadm.go:318] 
	I1025 09:52:59.952815  423245 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:52:59.952909  423245 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:52:59.952917  423245 kubeadm.go:318] 
	I1025 09:52:59.953042  423245 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token 3e2to9.9qk5lz9nbbbpeife \
	I1025 09:52:59.953176  423245 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6e42eae48b755d443fba2bbd8cd2499bc8de14d7e81dc26af35578c948bc74ab \
	I1025 09:52:59.953202  423245 kubeadm.go:318] 	--control-plane 
	I1025 09:52:59.953207  423245 kubeadm.go:318] 
	I1025 09:52:59.953306  423245 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:52:59.953313  423245 kubeadm.go:318] 
	I1025 09:52:59.953421  423245 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token 3e2to9.9qk5lz9nbbbpeife \
	I1025 09:52:59.953547  423245 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6e42eae48b755d443fba2bbd8cd2499bc8de14d7e81dc26af35578c948bc74ab 
	I1025 09:52:59.959577  423245 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:52:59.959729  423245 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:52:59.959762  423245 cni.go:84] Creating CNI manager for ""
	I1025 09:52:59.959777  423245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:52:59.964097  423245 out.go:179] * Configuring CNI (Container Networking Interface) ...
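	Note: the "Configuring CNI" step announced here corresponds to the commands visible further down in this log; a sketch using the same paths the runner logs:

	    stat /opt/cni/bin/portmap     # confirm the portmap CNI plugin is present on the node
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml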
	I1025 09:52:58.074904  417881 out.go:252]   - Generating certificates and keys ...
	I1025 09:52:58.075011  417881 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:52:58.075111  417881 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:52:58.211496  417881 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:52:59.558777  417881 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:52:59.634262  417881 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:52:59.946937  417881 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:53:00.023603  417881 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:53:00.023789  417881 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-656799] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:53:00.497213  417881 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:53:00.497964  417881 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-656799] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:53:00.581761  417881 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:53:01.037161  417881 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:52:59.045033  425060 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.766744234s
	I1025 09:52:59.973995  425060 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.697093436s
	I1025 09:53:01.779386  425060 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502428629s
	I1025 09:53:01.792285  425060 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:53:01.804616  425060 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:53:01.813929  425060 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:53:01.814190  425060 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-042675 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:53:01.823955  425060 kubeadm.go:318] [bootstrap-token] Using token: kkryr0.3d9d3vgu8uq73bih
	I1025 09:53:01.357202  417881 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:53:01.357372  417881 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:53:01.482755  417881 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:53:02.043859  417881 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:53:02.290766  417881 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:53:02.492061  417881 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:53:02.710579  417881 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:53:02.711160  417881 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:53:02.714477  417881 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:53:01.825578  425060 out.go:252]   - Configuring RBAC rules ...
	I1025 09:53:01.825740  425060 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:53:01.830427  425060 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:53:01.837431  425060 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:53:01.840544  425060 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:53:01.845725  425060 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:53:01.849160  425060 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:53:02.185941  425060 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:53:02.603606  425060 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:53:03.186178  425060 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:53:03.187223  425060 kubeadm.go:318] 
	I1025 09:53:03.187317  425060 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:53:03.187328  425060 kubeadm.go:318] 
	I1025 09:53:03.187508  425060 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:53:03.187538  425060 kubeadm.go:318] 
	I1025 09:53:03.187580  425060 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:53:03.187666  425060 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:53:03.187745  425060 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:53:03.187759  425060 kubeadm.go:318] 
	I1025 09:53:03.187841  425060 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:53:03.187848  425060 kubeadm.go:318] 
	I1025 09:53:03.187891  425060 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:53:03.187897  425060 kubeadm.go:318] 
	I1025 09:53:03.187940  425060 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:53:03.188044  425060 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:53:03.188152  425060 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:53:03.188162  425060 kubeadm.go:318] 
	I1025 09:53:03.188290  425060 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:53:03.188431  425060 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:53:03.188442  425060 kubeadm.go:318] 
	I1025 09:53:03.188570  425060 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token kkryr0.3d9d3vgu8uq73bih \
	I1025 09:53:03.188679  425060 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6e42eae48b755d443fba2bbd8cd2499bc8de14d7e81dc26af35578c948bc74ab \
	I1025 09:53:03.188700  425060 kubeadm.go:318] 	--control-plane 
	I1025 09:53:03.188704  425060 kubeadm.go:318] 
	I1025 09:53:03.188775  425060 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:53:03.188781  425060 kubeadm.go:318] 
	I1025 09:53:03.188911  425060 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token kkryr0.3d9d3vgu8uq73bih \
	I1025 09:53:03.189058  425060 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6e42eae48b755d443fba2bbd8cd2499bc8de14d7e81dc26af35578c948bc74ab 
	I1025 09:53:03.192329  425060 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:53:03.192502  425060 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:53:03.192529  425060 cni.go:84] Creating CNI manager for ""
	I1025 09:53:03.192542  425060 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:53:03.193698  425060 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:52:59.965534  423245 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:52:59.972939  423245 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:52:59.973106  423245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:52:59.987502  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:53:00.236983  423245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:53:00.237118  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:00.237179  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-880773 minikube.k8s.io/updated_at=2025_10_25T09_53_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=default-k8s-diff-port-880773 minikube.k8s.io/primary=true
	I1025 09:53:00.311955  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:00.311956  423245 ops.go:34] apiserver oom_adj: -16
	I1025 09:53:00.812409  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:01.312014  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:01.812551  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:02.312185  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:02.812560  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:03.312512  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:52:59.869961  416663 addons.go:514] duration metric: took 884.516226ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 09:53:00.146460  416663 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-676314" context rescaled to 1 replicas
	W1025 09:53:01.646541  416663 node_ready.go:57] node "old-k8s-version-676314" has "Ready":"False" status (will retry)
	W1025 09:53:03.654380  416663 node_ready.go:57] node "old-k8s-version-676314" has "Ready":"False" status (will retry)
	I1025 09:53:03.812623  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:04.312116  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:04.812709  423245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:04.907381  423245 kubeadm.go:1113] duration metric: took 4.670259745s to wait for elevateKubeSystemPrivileges
	I1025 09:53:04.907425  423245 kubeadm.go:402] duration metric: took 18.15696545s to StartCluster
	I1025 09:53:04.907448  423245 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:53:04.907527  423245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:53:04.908863  423245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:53:04.909137  423245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:53:04.909141  423245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:53:04.909322  423245 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:53:04.909437  423245 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-880773"
	I1025 09:53:04.909461  423245 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-880773"
	I1025 09:53:04.909495  423245 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:53:04.909662  423245 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-880773"
	I1025 09:53:04.909690  423245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-880773"
	I1025 09:53:04.909699  423245 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:04.910024  423245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:53:04.910031  423245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:53:04.910640  423245 out.go:179] * Verifying Kubernetes components...
	I1025 09:53:04.913458  423245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:53:04.939487  423245 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-880773"
	I1025 09:53:04.939684  423245 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:53:04.940235  423245 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:53:04.940305  423245 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:53:04.942472  423245 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:53:04.942495  423245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:53:04.942557  423245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:53:04.969414  423245 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:53:04.969496  423245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:53:04.969569  423245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:53:04.973428  423245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33220 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:53:05.013500  423245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33220 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:53:05.061533  423245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:53:05.149685  423245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:53:05.151789  423245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:53:05.169246  423245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:53:05.308231  423245 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1025 09:53:05.547167  423245 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-880773" to be "Ready" ...
	I1025 09:53:05.556241  423245 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 09:53:02.715765  417881 out.go:252]   - Booting up control plane ...
	I1025 09:53:02.715866  417881 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:53:02.715978  417881 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:53:02.716796  417881 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:53:02.731661  417881 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:53:02.731818  417881 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:53:02.739575  417881 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:53:02.739803  417881 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:53:02.739895  417881 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:53:02.860173  417881 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:53:02.860399  417881 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:53:03.861667  417881 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001586769s
	I1025 09:53:03.865691  417881 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:53:03.865818  417881 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 09:53:03.865934  417881 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:53:03.866061  417881 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:53:03.194772  425060 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:53:03.199077  425060 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:53:03.199096  425060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:53:03.212282  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:53:03.437792  425060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:53:03.437993  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:03.438078  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-042675 minikube.k8s.io/updated_at=2025_10_25T09_53_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=newest-cni-042675 minikube.k8s.io/primary=true
	I1025 09:53:03.447701  425060 ops.go:34] apiserver oom_adj: -16
	I1025 09:53:03.532703  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:04.033584  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:04.534997  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:05.032756  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:05.532831  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:06.033552  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:06.532872  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:07.032764  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:07.533022  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:08.032887  425060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:53:08.100278  425060 kubeadm.go:1113] duration metric: took 4.662325441s to wait for elevateKubeSystemPrivileges
	I1025 09:53:08.100306  425060 kubeadm.go:402] duration metric: took 18.159174224s to StartCluster
	I1025 09:53:08.100328  425060 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:53:08.100435  425060 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:53:08.101660  425060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:53:08.101880  425060 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:53:08.101889  425060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:53:08.101902  425060 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:53:08.102038  425060 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-042675"
	I1025 09:53:08.102048  425060 addons.go:69] Setting default-storageclass=true in profile "newest-cni-042675"
	I1025 09:53:08.102068  425060 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-042675"
	I1025 09:53:08.102072  425060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-042675"
	I1025 09:53:08.102100  425060 host.go:66] Checking if "newest-cni-042675" exists ...
	I1025 09:53:08.102102  425060 config.go:182] Loaded profile config "newest-cni-042675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:08.102496  425060 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:08.102689  425060 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:08.103438  425060 out.go:179] * Verifying Kubernetes components...
	I1025 09:53:08.105292  425060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:53:08.129382  425060 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:53:08.129858  425060 addons.go:238] Setting addon default-storageclass=true in "newest-cni-042675"
	I1025 09:53:08.129905  425060 host.go:66] Checking if "newest-cni-042675" exists ...
	I1025 09:53:08.130391  425060 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:08.131984  425060 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:53:08.132013  425060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:53:08.132077  425060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:08.162241  425060 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:53:08.162265  425060 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:53:08.162322  425060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:08.166605  425060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33225 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:05.558088  423245 addons.go:514] duration metric: took 648.770038ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 09:53:05.812299  423245 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-880773" context rescaled to 1 replicas
	W1025 09:53:07.553015  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	I1025 09:53:08.192976  425060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33225 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:08.206850  425060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:53:08.276465  425060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:53:08.287845  425060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:53:08.317370  425060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:53:08.389169  425060 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1025 09:53:08.390320  425060 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:53:08.390404  425060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:53:08.656853  425060 api_server.go:72] duration metric: took 554.931183ms to wait for apiserver process to appear ...
	I1025 09:53:08.656882  425060 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:53:08.656904  425060 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:53:08.662694  425060 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:53:08.663546  425060 api_server.go:141] control plane version: v1.34.1
	I1025 09:53:08.663575  425060 api_server.go:131] duration metric: took 6.68359ms to wait for apiserver health ...
	I1025 09:53:08.663587  425060 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:53:08.664527  425060 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 09:53:08.665582  425060 addons.go:514] duration metric: took 563.672404ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 09:53:08.666644  425060 system_pods.go:59] 8 kube-system pods found
	I1025 09:53:08.666679  425060 system_pods.go:61] "coredns-66bc5c9577-v4xpv" [c6b5ed04-03a3-4b67-bd8b-3d0392236861] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:53:08.666688  425060 system_pods.go:61] "etcd-newest-cni-042675" [559f055a-4502-4e2e-a28e-096449f29d72] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:53:08.666697  425060 system_pods.go:61] "kindnet-xsn67" [6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3] Running
	I1025 09:53:08.666711  425060 system_pods.go:61] "kube-apiserver-newest-cni-042675" [0be15777-76f2-46e9-b9da-fe0f7a4426a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:53:08.666716  425060 system_pods.go:61] "kube-controller-manager-newest-cni-042675" [8ffd378a-c0d8-4135-a9be-b7532cb0f44c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:53:08.666724  425060 system_pods.go:61] "kube-proxy-468gg" [7360d3df-fd12-429c-b79f-f8a744d0de49] Running
	I1025 09:53:08.666729  425060 system_pods.go:61] "kube-scheduler-newest-cni-042675" [98395f6b-3670-40f9-a7ca-1e9d5c7c0c4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:53:08.666733  425060 system_pods.go:61] "storage-provisioner" [43ce25b5-99bd-4159-9b8c-efd6ca6d159c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:53:08.666739  425060 system_pods.go:74] duration metric: took 3.146795ms to wait for pod list to return data ...
	I1025 09:53:08.666751  425060 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:53:08.668944  425060 default_sa.go:45] found service account: "default"
	I1025 09:53:08.668964  425060 default_sa.go:55] duration metric: took 2.207666ms for default service account to be created ...
	I1025 09:53:08.668976  425060 kubeadm.go:586] duration metric: took 567.062267ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:53:08.668999  425060 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:53:08.671708  425060 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:53:08.671730  425060 node_conditions.go:123] node cpu capacity is 8
	I1025 09:53:08.671746  425060 node_conditions.go:105] duration metric: took 2.742194ms to run NodePressure ...
	I1025 09:53:08.671761  425060 start.go:241] waiting for startup goroutines ...
	I1025 09:53:08.893440  425060 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-042675" context rescaled to 1 replicas
	I1025 09:53:08.893484  425060 start.go:246] waiting for cluster config update ...
	I1025 09:53:08.893500  425060 start.go:255] writing updated cluster config ...
	I1025 09:53:08.893828  425060 ssh_runner.go:195] Run: rm -f paused
	I1025 09:53:08.953872  425060 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:53:08.955252  425060 out.go:179] * Done! kubectl is now configured to use "newest-cni-042675" cluster and "default" namespace by default
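	Note: once a profile reports "Done!", a quick sanity check from the host is possible (a hedged sketch; kubectl 1.34.1 as reported in the line above, context name assumed to match the profile):

	    kubectl config current-context    # expected: newest-cni-042675
	    kubectl get nodes -o wide         # node may still be NotReady until the CNI comes up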
	I1025 09:53:06.339013  417881 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.473172464s
	I1025 09:53:07.580748  417881 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.715094697s
	I1025 09:53:09.367038  417881 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.501346114s
	I1025 09:53:09.381462  417881 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:53:09.392241  417881 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:53:09.402000  417881 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:53:09.402285  417881 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-656799 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:53:09.411049  417881 kubeadm.go:318] [bootstrap-token] Using token: z8wjjn.00udp146kjoh5szg
	W1025 09:53:06.147067  416663 node_ready.go:57] node "old-k8s-version-676314" has "Ready":"False" status (will retry)
	W1025 09:53:08.148458  416663 node_ready.go:57] node "old-k8s-version-676314" has "Ready":"False" status (will retry)
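	Note: the node_ready retries above poll the node's Ready condition for up to 6m0s; an equivalent one-liner (a sketch, assuming the kubeconfig context matches the profile name):

	    kubectl --context old-k8s-version-676314 wait node/old-k8s-version-676314 \
	      --for=condition=Ready --timeout=6m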
	
	
	==> CRI-O <==
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.107591433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.111263496Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=47cba3de-846e-47a3-acd5-47911b789372 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.112446411Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=01b16878-03aa-4a72-9b66-139b5a5e5067 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.113076437Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.113977748Z" level=info msg="Ran pod sandbox bee68d97c9f9d048a22c5a92d6be5f9ff6796946e188955a793d1b09a6108928 with infra container: kube-system/kube-proxy-468gg/POD" id=47cba3de-846e-47a3-acd5-47911b789372 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.115909162Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.116307947Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6a11538a-f9fd-40f0-9018-5668c7304937 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.117473605Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1b8bf0b5-3b42-43ff-a0fe-41211bca5fc3 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.117794768Z" level=info msg="Ran pod sandbox 070b709f1e5b5b1877acae6452221c4c9c4d75fa0c12e4740a992969972628f2 with infra container: kube-system/kindnet-xsn67/POD" id=01b16878-03aa-4a72-9b66-139b5a5e5067 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.119063729Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7b9812ed-f5ad-477d-b6f1-a3b74c91021b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.120915626Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4e4bc80e-7129-4029-ae7b-62000de49f92 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.122143766Z" level=info msg="Creating container: kube-system/kube-proxy-468gg/kube-proxy" id=5d81a1ba-a41f-4ca0-ab04-5133b20372ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.122276677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.124301415Z" level=info msg="Creating container: kube-system/kindnet-xsn67/kindnet-cni" id=bbec2f09-f69c-44af-8c39-3a8e9b9a07b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.124417945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.129104047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.129801359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.131944165Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.132549273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.30785818Z" level=info msg="Created container 1de5d1910daa8df87d9dc02d5c58d6578113b0aba82652f6ee5f6e46bcd92b7c: kube-system/kindnet-xsn67/kindnet-cni" id=bbec2f09-f69c-44af-8c39-3a8e9b9a07b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.30966221Z" level=info msg="Starting container: 1de5d1910daa8df87d9dc02d5c58d6578113b0aba82652f6ee5f6e46bcd92b7c" id=b9c6d629-f71d-4d93-a076-a35f93c758e5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.312496568Z" level=info msg="Started container" PID=1578 containerID=1de5d1910daa8df87d9dc02d5c58d6578113b0aba82652f6ee5f6e46bcd92b7c description=kube-system/kindnet-xsn67/kindnet-cni id=b9c6d629-f71d-4d93-a076-a35f93c758e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=070b709f1e5b5b1877acae6452221c4c9c4d75fa0c12e4740a992969972628f2
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.313244341Z" level=info msg="Created container 656ad5af325c850e87489cefba911fdd904c988b7879fb03bf982964e12f7a76: kube-system/kube-proxy-468gg/kube-proxy" id=5d81a1ba-a41f-4ca0-ab04-5133b20372ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.31392251Z" level=info msg="Starting container: 656ad5af325c850e87489cefba911fdd904c988b7879fb03bf982964e12f7a76" id=6eafbdd2-1f4f-48e8-82fb-d757308765c2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:53:08 newest-cni-042675 crio[772]: time="2025-10-25T09:53:08.317843768Z" level=info msg="Started container" PID=1579 containerID=656ad5af325c850e87489cefba911fdd904c988b7879fb03bf982964e12f7a76 description=kube-system/kube-proxy-468gg/kube-proxy id=6eafbdd2-1f4f-48e8-82fb-d757308765c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bee68d97c9f9d048a22c5a92d6be5f9ff6796946e188955a793d1b09a6108928
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1de5d1910daa8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   070b709f1e5b5       kindnet-xsn67                               kube-system
	656ad5af325c8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   bee68d97c9f9d       kube-proxy-468gg                            kube-system
	0ac0fcf650c59       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   13 seconds ago      Running             etcd                      0                   c5db0a393290f       etcd-newest-cni-042675                      kube-system
	52c78cbbf04ea       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   13 seconds ago      Running             kube-controller-manager   0                   92335514b8dcd       kube-controller-manager-newest-cni-042675   kube-system
	d36d75cf02b15       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   13 seconds ago      Running             kube-apiserver            0                   e883d8f5db8e4       kube-apiserver-newest-cni-042675            kube-system
	1502878f26aef       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   13 seconds ago      Running             kube-scheduler            0                   c3e853cce606b       kube-scheduler-newest-cni-042675            kube-system
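	Note: a container table like the one above is typically captured on the node via crictl against the CRI-O socket (an assumption; the report does not show the exact command used):

	    minikube -p newest-cni-042675 ssh -- sudo crictl ps -a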
	
	
	==> describe nodes <==
	Name:               newest-cni-042675
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-042675
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=newest-cni-042675
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_53_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:53:00 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-042675
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:53:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:53:02 +0000   Sat, 25 Oct 2025 09:52:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:53:02 +0000   Sat, 25 Oct 2025 09:52:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:53:02 +0000   Sat, 25 Oct 2025 09:52:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 09:53:02 +0000   Sat, 25 Oct 2025 09:52:57 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-042675
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                967fd215-cebb-4af9-b5cd-64a07c73ec38
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-042675                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-xsn67                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-042675             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-042675    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-468gg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-042675             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet          Node newest-cni-042675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-042675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet          Node newest-cni-042675 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s    node-controller  Node newest-cni-042675 event: Registered Node newest-cni-042675 in Controller
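	Note: Ready=False with reason NetworkPluginNotReady is expected at this point and clears once kindnet writes a CNI config into /etc/cni/net.d/. One way to watch the condition flip (hedged sketch):

	    kubectl get node newest-cni-042675 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'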
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	
	
	==> etcd [0ac0fcf650c593621cdc0c8ea9205b410fefa70d90ee0d184ed20bca7298ad43] <==
	{"level":"warn","ts":"2025-10-25T09:52:58.940948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:58.952154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:58.960535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:58.969935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:58.977953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:58.988826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:58.999135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.008532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.034766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.052976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.072978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.093861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.108560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.114280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.123930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.133290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.145249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.158056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.166621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.175094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.188924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.194136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.204613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.217046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:52:59.314108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32942","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:53:10 up  1:35,  0 user,  load average: 6.67, 4.46, 2.60
	Linux newest-cni-042675 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1de5d1910daa8df87d9dc02d5c58d6578113b0aba82652f6ee5f6e46bcd92b7c] <==
	I1025 09:53:08.515968       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:53:08.516284       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 09:53:08.516527       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:53:08.516549       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:53:08.516571       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:53:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:53:08.811445       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:53:08.811487       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:53:08.811500       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:53:08.811642       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:53:09.112039       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:53:09.112062       1 metrics.go:72] Registering metrics
	I1025 09:53:09.112109       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [d36d75cf02b153533dfb9f0292bc4f731d532e099876dbfa67dade965366bbed] <==
	I1025 09:53:00.002517       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1025 09:53:00.003991       1 controller.go:667] quota admission added evaluator for: namespaces
	E1025 09:53:00.008232       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1025 09:53:00.008924       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:53:00.011623       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:53:00.044180       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:53:00.044749       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:53:00.212795       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:53:00.909040       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:53:00.913535       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:53:00.913563       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:53:01.507805       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:53:01.551393       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:53:01.608835       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:53:01.616594       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1025 09:53:01.617855       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:53:01.622506       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:53:01.935140       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:53:02.591969       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:53:02.602560       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:53:02.611073       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:53:07.286593       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:53:07.785361       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1025 09:53:07.888157       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:53:07.893934       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [52c78cbbf04eaf48775777538879b6dbcc798cd2e0cb2307bc663d5fe693f6eb] <==
	I1025 09:53:06.898168       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-042675" podCIDRs=["10.42.0.0/24"]
	I1025 09:53:06.911398       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:53:06.933440       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:53:06.933575       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:53:06.933671       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-042675"
	I1025 09:53:06.933703       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:53:06.933730       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:53:06.933778       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:53:06.933990       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:53:06.934000       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:53:06.934958       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:53:06.934989       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:53:06.935018       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:53:06.935031       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:53:06.935045       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:53:06.935059       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:53:06.935026       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:53:06.936525       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:53:06.936556       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:53:06.936562       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:53:06.938312       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:53:06.939755       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:53:06.940983       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:53:06.954356       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:53:06.961714       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [656ad5af325c850e87489cefba911fdd904c988b7879fb03bf982964e12f7a76] <==
	I1025 09:53:08.363701       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:53:08.432709       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:53:08.533425       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:53:08.533474       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 09:53:08.533681       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:53:08.558150       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:53:08.558203       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:53:08.565010       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:53:08.565392       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:53:08.565487       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:53:08.567156       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:53:08.567177       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:53:08.567205       1 config.go:200] "Starting service config controller"
	I1025 09:53:08.567210       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:53:08.567226       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:53:08.567234       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:53:08.567533       1 config.go:309] "Starting node config controller"
	I1025 09:53:08.567550       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:53:08.567557       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:53:08.668240       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:53:08.668291       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:53:08.668420       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1502878f26aefcb4c10d8cbc4129327fdcbcb35daab12db459d2b917dcfb3749] <==
	E1025 09:52:59.972547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:52:59.972590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:52:59.972562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:52:59.972678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:52:59.972778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:52:59.973461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:52:59.973456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:52:59.973587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:53:00.780434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:53:00.789185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:53:00.800065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:53:00.805560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:53:00.817518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:53:00.831184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:53:00.837231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:53:00.907138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:53:00.916551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:53:01.074374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:53:01.169819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:53:01.180462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:53:01.212591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:53:01.270760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:53:01.282205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:53:01.473196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 09:53:03.767305       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: I1025 09:53:03.406158    1313 apiserver.go:52] "Watching apiserver"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: I1025 09:53:03.412627    1313 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: I1025 09:53:03.453049    1313 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-042675"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: I1025 09:53:03.453267    1313 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-042675"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: I1025 09:53:03.453292    1313 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-042675"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: I1025 09:53:03.453408    1313 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-042675"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: E1025 09:53:03.468809    1313 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-042675\" already exists" pod="kube-system/etcd-newest-cni-042675"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: E1025 09:53:03.471115    1313 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-042675\" already exists" pod="kube-system/kube-controller-manager-newest-cni-042675"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: E1025 09:53:03.471612    1313 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-042675\" already exists" pod="kube-system/kube-scheduler-newest-cni-042675"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: E1025 09:53:03.483058    1313 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-042675\" already exists" pod="kube-system/kube-apiserver-newest-cni-042675"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: I1025 09:53:03.510915    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-042675" podStartSLOduration=2.510893736 podStartE2EDuration="2.510893736s" podCreationTimestamp="2025-10-25 09:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:03.510756735 +0000 UTC m=+1.174309963" watchObservedRunningTime="2025-10-25 09:53:03.510893736 +0000 UTC m=+1.174446966"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: I1025 09:53:03.532832    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-042675" podStartSLOduration=1.532809317 podStartE2EDuration="1.532809317s" podCreationTimestamp="2025-10-25 09:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:03.52381648 +0000 UTC m=+1.187369702" watchObservedRunningTime="2025-10-25 09:53:03.532809317 +0000 UTC m=+1.196362547"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: I1025 09:53:03.542229    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-042675" podStartSLOduration=1.542209757 podStartE2EDuration="1.542209757s" podCreationTimestamp="2025-10-25 09:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:03.533072488 +0000 UTC m=+1.196625698" watchObservedRunningTime="2025-10-25 09:53:03.542209757 +0000 UTC m=+1.205762978"
	Oct 25 09:53:03 newest-cni-042675 kubelet[1313]: I1025 09:53:03.554563    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-042675" podStartSLOduration=1.5545402959999999 podStartE2EDuration="1.554540296s" podCreationTimestamp="2025-10-25 09:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:03.542557375 +0000 UTC m=+1.206110605" watchObservedRunningTime="2025-10-25 09:53:03.554540296 +0000 UTC m=+1.218093526"
	Oct 25 09:53:06 newest-cni-042675 kubelet[1313]: I1025 09:53:06.969134    1313 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 09:53:06 newest-cni-042675 kubelet[1313]: I1025 09:53:06.969852    1313 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 09:53:07 newest-cni-042675 kubelet[1313]: I1025 09:53:07.856391    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3-xtables-lock\") pod \"kindnet-xsn67\" (UID: \"6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3\") " pod="kube-system/kindnet-xsn67"
	Oct 25 09:53:07 newest-cni-042675 kubelet[1313]: I1025 09:53:07.856443    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7360d3df-fd12-429c-b79f-f8a744d0de49-kube-proxy\") pod \"kube-proxy-468gg\" (UID: \"7360d3df-fd12-429c-b79f-f8a744d0de49\") " pod="kube-system/kube-proxy-468gg"
	Oct 25 09:53:07 newest-cni-042675 kubelet[1313]: I1025 09:53:07.856458    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7360d3df-fd12-429c-b79f-f8a744d0de49-xtables-lock\") pod \"kube-proxy-468gg\" (UID: \"7360d3df-fd12-429c-b79f-f8a744d0de49\") " pod="kube-system/kube-proxy-468gg"
	Oct 25 09:53:07 newest-cni-042675 kubelet[1313]: I1025 09:53:07.856472    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3-cni-cfg\") pod \"kindnet-xsn67\" (UID: \"6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3\") " pod="kube-system/kindnet-xsn67"
	Oct 25 09:53:07 newest-cni-042675 kubelet[1313]: I1025 09:53:07.856492    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7360d3df-fd12-429c-b79f-f8a744d0de49-lib-modules\") pod \"kube-proxy-468gg\" (UID: \"7360d3df-fd12-429c-b79f-f8a744d0de49\") " pod="kube-system/kube-proxy-468gg"
	Oct 25 09:53:07 newest-cni-042675 kubelet[1313]: I1025 09:53:07.856508    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7lf7\" (UniqueName: \"kubernetes.io/projected/7360d3df-fd12-429c-b79f-f8a744d0de49-kube-api-access-v7lf7\") pod \"kube-proxy-468gg\" (UID: \"7360d3df-fd12-429c-b79f-f8a744d0de49\") " pod="kube-system/kube-proxy-468gg"
	Oct 25 09:53:07 newest-cni-042675 kubelet[1313]: I1025 09:53:07.856529    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3-lib-modules\") pod \"kindnet-xsn67\" (UID: \"6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3\") " pod="kube-system/kindnet-xsn67"
	Oct 25 09:53:07 newest-cni-042675 kubelet[1313]: I1025 09:53:07.856542    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rw86\" (UniqueName: \"kubernetes.io/projected/6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3-kube-api-access-8rw86\") pod \"kindnet-xsn67\" (UID: \"6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3\") " pod="kube-system/kindnet-xsn67"
	Oct 25 09:53:08 newest-cni-042675 kubelet[1313]: I1025 09:53:08.511020    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-xsn67" podStartSLOduration=1.510997285 podStartE2EDuration="1.510997285s" podCreationTimestamp="2025-10-25 09:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:08.494084789 +0000 UTC m=+6.157638022" watchObservedRunningTime="2025-10-25 09:53:08.510997285 +0000 UTC m=+6.174550515"
	

                                                
                                                
-- /stdout --
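The dump above comes from the post-mortem `minikube logs -n 25` run; a single component's section can be isolated from it with sed (a sketch; each section runs from its `==> name <==` header to the next blank line):

	# Print only the etcd section of the log dump.
	out/minikube-linux-amd64 -p newest-cni-042675 logs -n 25 | sed -n '/==> etcd/,/^$/p'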
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-042675 -n newest-cni-042675
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-042675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-v4xpv storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-042675 describe pod coredns-66bc5c9577-v4xpv storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-042675 describe pod coredns-66bc5c9577-v4xpv storage-provisioner: exit status 1 (59.337956ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-v4xpv" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-042675 describe pod coredns-66bc5c9577-v4xpv storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.36s)
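To re-run only this subtest, the standard Go subtest filter applies (a sketch assuming a minikube source checkout; CI passes extra harness flags not shown here):

	# Re-run the single failing subtest verbosely.
	go test ./test/integration -run 'TestStartStop/group/newest-cni/serial/EnableAddonWhileActive' -v -timeout 30m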

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-676314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-676314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (264.502857ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
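The MK_ADDON_ENABLE_PAUSED failure comes from minikube's check for paused containers, which shells out to `sudo runc list -f json` on the node and trips over the missing `/run/runc` directory. The probe can be replayed by hand (a sketch using the profile name from the log above):

	# Replay the runtime check that minikube ran inside the node.
	out/minikube-linux-amd64 -p old-k8s-version-676314 ssh -- sudo runc list -f json
	# Check whether the runc state directory exists yet.
	out/minikube-linux-amd64 -p old-k8s-version-676314 ssh -- ls -ld /run/runc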
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-676314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-676314 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-676314 describe deploy/metrics-server -n kube-system: exit status 1 (60.21092ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-676314 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
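When the enable step does succeed, the image expectation can be verified straight from the deployment spec (a sketch; prints the container image(s) on one line):

	# Show which image the metrics-server deployment references.
	kubectl --context old-k8s-version-676314 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'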
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-676314
helpers_test.go:243: (dbg) docker inspect old-k8s-version-676314:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7",
	        "Created": "2025-10-25T09:52:30.302289758Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 420212,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:52:30.35600531Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7/hosts",
	        "LogPath": "/var/lib/docker/containers/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7-json.log",
	        "Name": "/old-k8s-version-676314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-676314:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-676314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7",
	                "LowerDir": "/var/lib/docker/overlay2/ee55f66edc956ba04d8a48ac2f58334c6be8a80c382de1ca530ee94ac23a8ce7-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ee55f66edc956ba04d8a48ac2f58334c6be8a80c382de1ca530ee94ac23a8ce7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ee55f66edc956ba04d8a48ac2f58334c6be8a80c382de1ca530ee94ac23a8ce7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ee55f66edc956ba04d8a48ac2f58334c6be8a80c382de1ca530ee94ac23a8ce7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-676314",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-676314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-676314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-676314",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-676314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0fe3c0ed1dc72a8fc8a66597370e68566f43797c3d9b854b73adf4273e40125a",
	            "SandboxKey": "/var/run/docker/netns/0fe3c0ed1dc7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33215"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33216"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33217"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33218"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-676314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:cd:82:b6:6a:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f66217c06b76e94123bb60007cf891525ec1407362c18c5530791b0803181dbc",
	                    "EndpointID": "ed08aaf0ca04033ebf20d77c22599d944cdf424ef1d57b8c616f8a7233e4efc8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-676314",
	                        "05255cf7a9be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
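The `Ports` map in the inspect output is how the docker driver exposes the node (SSH on host port 33215, the API server on 33218). Individual mappings can be pulled out with Docker's Go-template formatter (a sketch using the container name from this output):

	# Extract the host ports mapped to SSH (22/tcp) and the API server (8443/tcp).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-676314
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-676314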
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-676314 -n old-k8s-version-676314
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-676314 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-676314 logs -n 25: (1.111598809s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-035825 sudo cat /etc/docker/daemon.json                                                                                                                                                                                 │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p enable-default-cni-035825 sudo docker system info                                                                                                                                                                                          │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                    │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                              │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cri-dockerd --version                                                                                                                                                                                       │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-129588                                                                                                                                                                                                                  │ kubernetes-upgrade-129588    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat containerd --no-pager                                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                  │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /etc/containerd/config.toml                                                                                                                                                                             │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo containerd config dump                                                                                                                                                                                      │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl status crio --all --full --no-pager                                                                                                                                                               │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat crio --no-pager                                                                                                                                                                               │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                     │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo crio config                                                                                                                                                                                                 │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ delete  │ -p enable-default-cni-035825                                                                                                                                                                                                                  │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-042675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p newest-cni-042675 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-042675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-676314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:53:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:53:19.773511  434603 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:53:19.773777  434603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:19.773788  434603 out.go:374] Setting ErrFile to fd 2...
	I1025 09:53:19.773794  434603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:19.773993  434603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:53:19.774486  434603 out.go:368] Setting JSON to false
	I1025 09:53:19.775873  434603 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5744,"bootTime":1761380256,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:53:19.775964  434603 start.go:141] virtualization: kvm guest
	I1025 09:53:19.778007  434603 out.go:179] * [newest-cni-042675] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:53:19.779678  434603 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:53:19.779677  434603 notify.go:220] Checking for updates...
	I1025 09:53:19.781023  434603 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:53:19.782319  434603 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:53:19.783610  434603 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:53:19.784802  434603 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:53:19.786012  434603 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:53:19.787722  434603 config.go:182] Loaded profile config "newest-cni-042675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:19.788279  434603 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:53:19.811602  434603 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:53:19.811685  434603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:53:19.870733  434603 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:53:19.859806551 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:53:19.870859  434603 docker.go:318] overlay module found
	I1025 09:53:19.872622  434603 out.go:179] * Using the docker driver based on existing profile
	I1025 09:53:19.873853  434603 start.go:305] selected driver: docker
	I1025 09:53:19.873867  434603 start.go:925] validating driver "docker" against &{Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:53:19.873956  434603 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:53:19.874618  434603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:53:19.933915  434603 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:53:19.923450647 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:53:19.934265  434603 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:53:19.934299  434603 cni.go:84] Creating CNI manager for ""
	I1025 09:53:19.934362  434603 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:53:19.934413  434603 start.go:349] cluster config:
	{Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:53:19.936710  434603 out.go:179] * Starting "newest-cni-042675" primary control-plane node in "newest-cni-042675" cluster
	I1025 09:53:19.937738  434603 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:53:19.938786  434603 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:53:19.939853  434603 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:53:19.939887  434603 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:53:19.939903  434603 cache.go:58] Caching tarball of preloaded images
	I1025 09:53:19.939972  434603 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:53:19.939990  434603 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:53:19.940011  434603 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:53:19.940113  434603 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/config.json ...
	I1025 09:53:19.961461  434603 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:53:19.961488  434603 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:53:19.961504  434603 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:53:19.961533  434603 start.go:360] acquireMachinesLock for newest-cni-042675: {Name:mk7919472b767e9cb704209265f0c08926368ab3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:19.961631  434603 start.go:364] duration metric: took 75.533µs to acquireMachinesLock for "newest-cni-042675"
	I1025 09:53:19.961656  434603 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:53:19.961663  434603 fix.go:54] fixHost starting: 
	I1025 09:53:19.961915  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:19.979722  434603 fix.go:112] recreateIfNeeded on newest-cni-042675: state=Stopped err=<nil>
	W1025 09:53:19.979756  434603 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:53:17.532535  417881 node_ready.go:57] node "no-preload-656799" has "Ready":"False" status (will retry)
	W1025 09:53:20.032444  417881 node_ready.go:57] node "no-preload-656799" has "Ready":"False" status (will retry)
	W1025 09:53:19.550089  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	W1025 09:53:22.049980  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	I1025 09:53:19.981655  434603 out.go:252] * Restarting existing docker container for "newest-cni-042675" ...
	I1025 09:53:19.981738  434603 cli_runner.go:164] Run: docker start newest-cni-042675
	I1025 09:53:20.244452  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:20.263132  434603 kic.go:430] container "newest-cni-042675" state is running.
	I1025 09:53:20.263663  434603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042675
	I1025 09:53:20.282999  434603 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/config.json ...
	I1025 09:53:20.283222  434603 machine.go:93] provisionDockerMachine start ...
	I1025 09:53:20.283290  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:20.302362  434603 main.go:141] libmachine: Using SSH client type: native
	I1025 09:53:20.302643  434603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33230 <nil> <nil>}
	I1025 09:53:20.302656  434603 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:53:20.303404  434603 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42078->127.0.0.1:33230: read: connection reset by peer
	I1025 09:53:23.445660  434603 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-042675
	
	I1025 09:53:23.445687  434603 ubuntu.go:182] provisioning hostname "newest-cni-042675"
	I1025 09:53:23.445755  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:23.464263  434603 main.go:141] libmachine: Using SSH client type: native
	I1025 09:53:23.464618  434603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33230 <nil> <nil>}
	I1025 09:53:23.464638  434603 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-042675 && echo "newest-cni-042675" | sudo tee /etc/hostname
	I1025 09:53:23.617144  434603 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-042675
	
	I1025 09:53:23.617206  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:23.637112  434603 main.go:141] libmachine: Using SSH client type: native
	I1025 09:53:23.637321  434603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33230 <nil> <nil>}
	I1025 09:53:23.637338  434603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-042675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-042675/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-042675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:53:23.779190  434603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
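	The hostname script above rewrites the 127.0.1.1 entry when one exists and appends one otherwise. A quick way to confirm the resulting mapping on the node, shown as an illustrative command against this run's profile name:
	
	    minikube -p newest-cni-042675 ssh "grep newest-cni-042675 /etc/hosts"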
	I1025 09:53:23.779218  434603 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:53:23.779240  434603 ubuntu.go:190] setting up certificates
	I1025 09:53:23.779252  434603 provision.go:84] configureAuth start
	I1025 09:53:23.779310  434603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042675
	I1025 09:53:23.797637  434603 provision.go:143] copyHostCerts
	I1025 09:53:23.797724  434603 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:53:23.797745  434603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:53:23.797826  434603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:53:23.797982  434603 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:53:23.797996  434603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:53:23.798043  434603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:53:23.798193  434603 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:53:23.798205  434603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:53:23.798249  434603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:53:23.798339  434603 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.newest-cni-042675 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-042675]
	I1025 09:53:23.990329  434603 provision.go:177] copyRemoteCerts
	I1025 09:53:23.990395  434603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:53:23.990428  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.008847  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.108756  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:53:24.126966  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:53:24.145322  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:53:24.163049  434603 provision.go:87] duration metric: took 383.782933ms to configureAuth
	I1025 09:53:24.163079  434603 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:53:24.163260  434603 config.go:182] Loaded profile config "newest-cni-042675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:24.163377  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.181273  434603 main.go:141] libmachine: Using SSH client type: native
	I1025 09:53:24.181542  434603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33230 <nil> <nil>}
	I1025 09:53:24.181568  434603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:53:24.454606  434603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:53:24.454634  434603 machine.go:96] duration metric: took 4.171394685s to provisionDockerMachine
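	With the docker driver the sysconfig drop-in written just above can also be read straight from the node, since the kic container is named after the profile; an illustrative check from the host:
	
	    docker exec newest-cni-042675 cat /etc/sysconfig/crio.minikube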
	I1025 09:53:24.454647  434603 start.go:293] postStartSetup for "newest-cni-042675" (driver="docker")
	I1025 09:53:24.454660  434603 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:53:24.454735  434603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:53:24.454788  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.475283  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.578292  434603 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:53:24.582207  434603 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:53:24.582234  434603 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:53:24.582244  434603 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:53:24.582297  434603 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:53:24.582430  434603 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:53:24.582597  434603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:53:24.590297  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:53:24.608243  434603 start.go:296] duration metric: took 153.577737ms for postStartSetup
	I1025 09:53:24.608370  434603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:53:24.608427  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.626241  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.724259  434603 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:53:24.729324  434603 fix.go:56] duration metric: took 4.767655075s for fixHost
	I1025 09:53:24.729386  434603 start.go:83] releasing machines lock for "newest-cni-042675", held for 4.767735987s
	I1025 09:53:24.729491  434603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042675
	I1025 09:53:24.748071  434603 ssh_runner.go:195] Run: cat /version.json
	I1025 09:53:24.748139  434603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:53:24.748223  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.748143  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.768083  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.768476  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.925089  434603 ssh_runner.go:195] Run: systemctl --version
	I1025 09:53:24.932469  434603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:53:24.967868  434603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:53:24.972763  434603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:53:24.972823  434603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:53:24.981758  434603 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:53:24.981780  434603 start.go:495] detecting cgroup driver to use...
	I1025 09:53:24.981813  434603 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:53:24.981878  434603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:53:24.996854  434603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:53:25.009584  434603 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:53:25.009649  434603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:53:25.023923  434603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:53:25.037770  434603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:53:25.138823  434603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:53:25.235687  434603 docker.go:234] disabling docker service ...
	I1025 09:53:25.235757  434603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:53:25.252330  434603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:53:25.267278  434603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:53:25.354057  434603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:53:25.441625  434603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:53:25.456619  434603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:53:25.473379  434603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:53:25.473454  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.484228  434603 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:53:25.484289  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.493990  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.503472  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.513233  434603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:53:25.522416  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.532563  434603 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.541834  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
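	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (an illustrative reconstruction rather than a verbatim dump of the file; the section headers are assumed from CRI-O's usual config layout):
	
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	
	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]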
	I1025 09:53:25.552718  434603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:53:25.562386  434603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:53:25.570830  434603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:53:25.663161  434603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:53:25.768763  434603 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:53:25.768828  434603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:53:25.773115  434603 start.go:563] Will wait 60s for crictl version
	I1025 09:53:25.773178  434603 ssh_runner.go:195] Run: which crictl
	I1025 09:53:25.777144  434603 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:53:25.803639  434603 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:53:25.803716  434603 ssh_runner.go:195] Run: crio --version
	I1025 09:53:25.835380  434603 ssh_runner.go:195] Run: crio --version
	I1025 09:53:25.872240  434603 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:53:25.873393  434603 cli_runner.go:164] Run: docker network inspect newest-cni-042675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:53:25.892746  434603 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:53:25.896824  434603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:53:25.908783  434603 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Oct 25 09:53:13 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:13.268218765Z" level=info msg="Starting container: b75551e6226789ea0b57314cc1af99cff0e3bb22e84715858b348326d9b0e15a" id=f9746372-954b-49f6-9ab9-7a1c04a0353a name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:53:13 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:13.27007284Z" level=info msg="Started container" PID=2145 containerID=b75551e6226789ea0b57314cc1af99cff0e3bb22e84715858b348326d9b0e15a description=kube-system/coredns-5dd5756b68-qffxt/coredns id=f9746372-954b-49f6-9ab9-7a1c04a0353a name=/runtime.v1.RuntimeService/StartContainer sandboxID=594e39250e3377c068b44a8032162e69da4be7ff5a3f42629da00c1a6a758b3e
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.381550858Z" level=info msg="Running pod sandbox: default/busybox/POD" id=009e47fc-9ba1-4f8d-98f0-6a3068c3604e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.381662793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.387333977Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7c34a7265975eadbebed1ba83c59f5542c3da7a6b2c36d41cdcbeff43bf6ce91 UID:f284177c-1d8d-4d46-8b15-3d8cb988f9d5 NetNS:/var/run/netns/47c3c8a3-605a-4999-9801-28fe3826c165 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000806ae8}] Aliases:map[]}"
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.387388152Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.397300165Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7c34a7265975eadbebed1ba83c59f5542c3da7a6b2c36d41cdcbeff43bf6ce91 UID:f284177c-1d8d-4d46-8b15-3d8cb988f9d5 NetNS:/var/run/netns/47c3c8a3-605a-4999-9801-28fe3826c165 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000806ae8}] Aliases:map[]}"
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.397464151Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.39825763Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.399409813Z" level=info msg="Ran pod sandbox 7c34a7265975eadbebed1ba83c59f5542c3da7a6b2c36d41cdcbeff43bf6ce91 with infra container: default/busybox/POD" id=009e47fc-9ba1-4f8d-98f0-6a3068c3604e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.400632699Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ab09b8b6-8bd5-4ca1-b181-4aaf56f7134d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.400743141Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ab09b8b6-8bd5-4ca1-b181-4aaf56f7134d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.400780818Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ab09b8b6-8bd5-4ca1-b181-4aaf56f7134d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.401232308Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd4c093b-79f8-44dc-bd1f-eb964a2bdce6 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:53:16 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:16.403858315Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:53:18 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:18.431906268Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=dd4c093b-79f8-44dc-bd1f-eb964a2bdce6 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:53:18 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:18.432927135Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ea1795b6-2ec9-49f0-ab41-9d208682eee5 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:18 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:18.434572508Z" level=info msg="Creating container: default/busybox/busybox" id=96f0e4d7-ebd7-4b6b-9063-d50719503284 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:18 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:18.434682766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:18 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:18.439428517Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:18 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:18.439883676Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:18 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:18.476236457Z" level=info msg="Created container c77b01a889dd29d0af81ab07062c0ecbf00bff80b4713fcf65846ee680d17739: default/busybox/busybox" id=96f0e4d7-ebd7-4b6b-9063-d50719503284 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:18 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:18.476894953Z" level=info msg="Starting container: c77b01a889dd29d0af81ab07062c0ecbf00bff80b4713fcf65846ee680d17739" id=d5be61de-eaff-4698-bdc1-1bd8860af8b7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:53:18 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:18.478535375Z" level=info msg="Started container" PID=2223 containerID=c77b01a889dd29d0af81ab07062c0ecbf00bff80b4713fcf65846ee680d17739 description=default/busybox/busybox id=d5be61de-eaff-4698-bdc1-1bd8860af8b7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c34a7265975eadbebed1ba83c59f5542c3da7a6b2c36d41cdcbeff43bf6ce91
	Oct 25 09:53:25 old-k8s-version-676314 crio[799]: time="2025-10-25T09:53:25.178714133Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	c77b01a889dd2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   7c34a7265975e       busybox                                          default
	b75551e622678       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   594e39250e337       coredns-5dd5756b68-qffxt                         kube-system
	6e19f5c2d024e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   4fa5a96b9997b       storage-provisioner                              kube-system
	822f3437080e6       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   27aa92a367be7       kindnet-5hnxc                                    kube-system
	0108e86e98ed9       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   621b13c7f1f9d       kube-proxy-bsxx6                                 kube-system
	21605ce54d0f4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   b82fc2a37661a       etcd-old-k8s-version-676314                      kube-system
	a916588eceef7       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   bd34596b2d160       kube-apiserver-old-k8s-version-676314            kube-system
	7ed1e42ba0677       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   a9242eaf85aa0       kube-controller-manager-old-k8s-version-676314   kube-system
	a630a21420a6f       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   51c631348b332       kube-scheduler-old-k8s-version-676314            kube-system
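	The listing above has the shape of crictl's container table; an equivalent manual query on the node would look like the following (illustrative, run inside the node over SSH):
	
	    sudo crictl ps -a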
	
	
	==> coredns [b75551e6226789ea0b57314cc1af99cff0e3bb22e84715858b348326d9b0e15a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46308 - 32830 "HINFO IN 7692331754904658330.4102065983793319280. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018444113s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-676314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-676314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=old-k8s-version-676314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_52_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-676314
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:53:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:53:17 +0000   Sat, 25 Oct 2025 09:52:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:53:17 +0000   Sat, 25 Oct 2025 09:52:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:53:17 +0000   Sat, 25 Oct 2025 09:52:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:53:17 +0000   Sat, 25 Oct 2025 09:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-676314
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                91553f51-64a8-4128-a815-5ed176c5ea05
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-qffxt                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-676314                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-5hnxc                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-676314             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-676314    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-bsxx6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-676314             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-676314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-676314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-676314 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-676314 event: Registered Node old-k8s-version-676314 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-676314 status is now: NodeReady
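	The node summary above is the standard kubectl node description for this profile; reproducing it by hand would look like the following (illustrative, assuming the kubeconfig context minikube writes for the profile):
	
	    kubectl --context old-k8s-version-676314 describe node old-k8s-version-676314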
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
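	The repeated "martian source" entries are the kernel flagging packets whose source address is unexpected on eth0; they appear because martian logging is enabled on this host. The relevant knob, shown here only as an illustrative read of the current setting:
	
	    sysctl net.ipv4.conf.all.log_martians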
	
	
	==> etcd [21605ce54d0f439e8fabef1cb0627a4526362ccf2656657a17513c570449c8c2] <==
	{"level":"warn","ts":"2025-10-25T09:52:43.919907Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"364.26147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:350"}
	{"level":"warn","ts":"2025-10-25T09:52:43.92093Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.720807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/old-k8s-version-676314\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-25T09:52:43.920943Z","caller":"traceutil/trace.go:171","msg":"trace[2142887542] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:18; }","duration":"365.304799ms","start":"2025-10-25T09:52:43.555629Z","end":"2025-10-25T09:52:43.920934Z","steps":["trace[2142887542] 'agreement among raft nodes before linearized reading'  (duration: 363.991944ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:52:43.920963Z","caller":"traceutil/trace.go:171","msg":"trace[677457329] range","detail":"{range_begin:/registry/csinodes/old-k8s-version-676314; range_end:; response_count:0; response_revision:19; }","duration":"294.757965ms","start":"2025-10-25T09:52:43.626196Z","end":"2025-10-25T09:52:43.920954Z","steps":["trace[677457329] 'agreement among raft nodes before linearized reading'  (duration: 294.668067ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:52:43.92097Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T09:52:43.55561Z","time spent":"365.35088ms","remote":"127.0.0.1:51012","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":373,"request content":"key:\"/registry/namespaces/kube-system\" "}
	{"level":"warn","ts":"2025-10-25T09:52:43.91993Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"415.014891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/csr-bhp7r\" ","response":"range_response_count:1 size:895"}
	{"level":"info","ts":"2025-10-25T09:52:43.921019Z","caller":"traceutil/trace.go:171","msg":"trace[1249802250] range","detail":"{range_begin:/registry/certificatesigningrequests/csr-bhp7r; range_end:; response_count:1; response_revision:18; }","duration":"416.105103ms","start":"2025-10-25T09:52:43.504905Z","end":"2025-10-25T09:52:43.921011Z","steps":["trace[1249802250] 'agreement among raft nodes before linearized reading'  (duration: 414.599266ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:52:43.921046Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-10-25T09:52:43.504895Z","time spent":"416.14338ms","remote":"127.0.0.1:51156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":918,"request content":"key:\"/registry/certificatesigningrequests/csr-bhp7r\" "}
	{"level":"info","ts":"2025-10-25T09:52:43.920707Z","caller":"traceutil/trace.go:171","msg":"trace[1702130768] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:19; }","duration":"295.544406ms","start":"2025-10-25T09:52:43.625153Z","end":"2025-10-25T09:52:43.920698Z","steps":["trace[1702130768] 'agreement among raft nodes before linearized reading'  (duration: 295.444718ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:52:44.116385Z","caller":"traceutil/trace.go:171","msg":"trace[1147328777] transaction","detail":"{read_only:false; response_revision:29; number_of_response:1; }","duration":"185.4479ms","start":"2025-10-25T09:52:43.930876Z","end":"2025-10-25T09:52:44.116323Z","steps":["trace[1147328777] 'process raft request'  (duration: 185.385395ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:52:44.116444Z","caller":"traceutil/trace.go:171","msg":"trace[1425000041] transaction","detail":"{read_only:false; response_revision:31; number_of_response:1; }","duration":"184.891782ms","start":"2025-10-25T09:52:43.93153Z","end":"2025-10-25T09:52:44.116422Z","steps":["trace[1425000041] 'process raft request'  (duration: 184.775585ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:52:44.116443Z","caller":"traceutil/trace.go:171","msg":"trace[1078381855] linearizableReadLoop","detail":"{readStateIndex:37; appliedIndex:31; }","duration":"182.810009ms","start":"2025-10-25T09:52:43.93362Z","end":"2025-10-25T09:52:44.11643Z","steps":["trace[1078381855] 'read index received'  (duration: 141.328444ms)","trace[1078381855] 'applied index is now lower than readState.Index'  (duration: 41.480959ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:52:44.116456Z","caller":"traceutil/trace.go:171","msg":"trace[230083105] transaction","detail":"{read_only:false; response_revision:28; number_of_response:1; }","duration":"185.563213ms","start":"2025-10-25T09:52:43.930875Z","end":"2025-10-25T09:52:44.116438Z","steps":["trace[230083105] 'process raft request'  (duration: 144.050337ms)","trace[230083105] 'compare'  (duration: 41.218895ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T09:52:44.11648Z","caller":"traceutil/trace.go:171","msg":"trace[1806900323] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"185.422629ms","start":"2025-10-25T09:52:43.931052Z","end":"2025-10-25T09:52:44.116475Z","steps":["trace[1806900323] 'process raft request'  (duration: 185.228867ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:52:44.116495Z","caller":"traceutil/trace.go:171","msg":"trace[187215328] transaction","detail":"{read_only:false; response_revision:33; number_of_response:1; }","duration":"184.207603ms","start":"2025-10-25T09:52:43.932282Z","end":"2025-10-25T09:52:44.11649Z","steps":["trace[187215328] 'process raft request'  (duration: 184.111983ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:52:44.116501Z","caller":"traceutil/trace.go:171","msg":"trace[989617643] transaction","detail":"{read_only:false; response_revision:32; number_of_response:1; }","duration":"184.651245ms","start":"2025-10-25T09:52:43.931844Z","end":"2025-10-25T09:52:44.116496Z","steps":["trace[989617643] 'process raft request'  (duration: 184.48189ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:52:44.116582Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.954691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-25T09:52:44.117067Z","caller":"traceutil/trace.go:171","msg":"trace[2066121040] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:33; }","duration":"185.452304ms","start":"2025-10-25T09:52:43.9316Z","end":"2025-10-25T09:52:44.117052Z","steps":["trace[2066121040] 'agreement among raft nodes before linearized reading'  (duration: 184.873171ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:52:44.117217Z","caller":"traceutil/trace.go:171","msg":"trace[186061241] transaction","detail":"{read_only:false; response_revision:34; number_of_response:1; }","duration":"182.237786ms","start":"2025-10-25T09:52:43.934967Z","end":"2025-10-25T09:52:44.117205Z","steps":["trace[186061241] 'process raft request'  (duration: 182.130813ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:52:44.117281Z","caller":"traceutil/trace.go:171","msg":"trace[1281792677] transaction","detail":"{read_only:false; response_revision:35; number_of_response:1; }","duration":"181.755651ms","start":"2025-10-25T09:52:43.935516Z","end":"2025-10-25T09:52:44.117272Z","steps":["trace[1281792677] 'process raft request'  (duration: 181.634199ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:52:44.117314Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.430609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-node-lease\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-25T09:52:44.117376Z","caller":"traceutil/trace.go:171","msg":"trace[93494987] range","detail":"{range_begin:/registry/namespaces/kube-node-lease; range_end:; response_count:0; response_revision:36; }","duration":"139.479099ms","start":"2025-10-25T09:52:43.977864Z","end":"2025-10-25T09:52:44.117343Z","steps":["trace[93494987] 'agreement among raft nodes before linearized reading'  (duration: 139.353294ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:52:44.117456Z","caller":"traceutil/trace.go:171","msg":"trace[1719645648] transaction","detail":"{read_only:false; response_revision:36; number_of_response:1; }","duration":"181.581571ms","start":"2025-10-25T09:52:43.935864Z","end":"2025-10-25T09:52:44.117446Z","steps":["trace[1719645648] 'process raft request'  (duration: 181.322666ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:52:52.953503Z","caller":"traceutil/trace.go:171","msg":"trace[497776302] transaction","detail":"{read_only:false; response_revision:274; number_of_response:1; }","duration":"177.055824ms","start":"2025-10-25T09:52:52.776419Z","end":"2025-10-25T09:52:52.953475Z","steps":["trace[497776302] 'process raft request'  (duration: 176.89185ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:53:26 up  1:35,  0 user,  load average: 5.27, 4.26, 2.57
	Linux old-k8s-version-676314 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [822f3437080e64c0dad28514d701c93728470f28d2f1b4e8b823b209c9af1fb0] <==
	I1025 09:53:02.110822       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:53:02.111060       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:53:02.111189       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:53:02.111203       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:53:02.111224       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:53:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:53:02.313690       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:53:02.313733       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:53:02.313748       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:53:02.313952       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:53:02.813827       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:53:02.813874       1 metrics.go:72] Registering metrics
	I1025 09:53:02.813956       1 controller.go:711] "Syncing nftables rules"
	I1025 09:53:12.314618       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:53:12.314661       1 main.go:301] handling current node
	I1025 09:53:22.314570       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:53:22.314599       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a916588eceef74a4348547159bac2dae9656dfe637edb1e867c2b09f99850d62] <==
	I1025 09:52:43.455595       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 09:52:43.479845       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 09:52:43.479908       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 09:52:43.480470       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 09:52:43.481027       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:52:43.481892       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 09:52:43.923502       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:52:43.926928       1 trace.go:236] Trace[304263040]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:3a6ce6f4-c6b6-480c-b775-c9c175a994de,client:::1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases,user-agent:kube-apiserver/v1.28.0 (linux/amd64) kubernetes/855e7c4,verb:POST (25-Oct-2025 09:52:43.402) (total time: 524ms):
	Trace[304263040]: [524.391841ms] [524.391841ms] END
	I1025 09:52:44.388995       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:52:44.396834       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:52:44.396949       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:52:44.948901       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:52:45.011329       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:52:45.217413       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:52:45.225480       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 09:52:45.226992       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 09:52:45.232604       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:52:45.428141       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 09:52:46.645084       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 09:52:46.656590       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:52:46.673256       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1025 09:52:58.985471       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1025 09:52:59.138280       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7ed1e42ba06778cc0c61927c5fd88c28cd6bf40c5f20de9acdca0ad1109fed2b] <==
	I1025 09:52:58.425427       1 shared_informer.go:318] Caches are synced for attach detach
	I1025 09:52:58.431582       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1025 09:52:58.458082       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1025 09:52:58.479602       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 09:52:58.824534       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:52:58.827500       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 09:52:58.834972       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:52:59.018463       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5hnxc"
	I1025 09:52:59.023336       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bsxx6"
	I1025 09:52:59.148826       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1025 09:52:59.297492       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-k9x9k"
	I1025 09:52:59.313687       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qffxt"
	I1025 09:52:59.333152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="186.663209ms"
	I1025 09:52:59.427448       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.224486ms"
	I1025 09:52:59.427869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="346.337µs"
	I1025 09:52:59.670238       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1025 09:52:59.682235       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-k9x9k"
	I1025 09:52:59.690389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.594118ms"
	I1025 09:52:59.698511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.068192ms"
	I1025 09:52:59.698715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.19µs"
	I1025 09:53:12.908476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="144.635µs"
	I1025 09:53:12.920295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="143.88µs"
	I1025 09:53:13.380581       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1025 09:53:13.841470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.834251ms"
	I1025 09:53:13.841567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.046µs"
	
	
	==> kube-proxy [0108e86e98ed9d0ff460e871a8e7e1c62b8dd1273fed936f5d0b8043b8be064e] <==
	I1025 09:52:59.567241       1 server_others.go:69] "Using iptables proxy"
	I1025 09:52:59.583125       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1025 09:52:59.631483       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:52:59.635142       1 server_others.go:152] "Using iptables Proxier"
	I1025 09:52:59.635192       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 09:52:59.635203       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 09:52:59.635234       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 09:52:59.635561       1 server.go:846] "Version info" version="v1.28.0"
	I1025 09:52:59.635594       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:52:59.636324       1 config.go:188] "Starting service config controller"
	I1025 09:52:59.636380       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 09:52:59.636333       1 config.go:97] "Starting endpoint slice config controller"
	I1025 09:52:59.636449       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 09:52:59.636893       1 config.go:315] "Starting node config controller"
	I1025 09:52:59.636904       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 09:52:59.737018       1 shared_informer.go:318] Caches are synced for node config
	I1025 09:52:59.737028       1 shared_informer.go:318] Caches are synced for service config
	I1025 09:52:59.737076       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a630a21420a6fc6ae397ed13988afd78d0a92c459de340e187090043dbfcb0c5] <==
	W1025 09:52:43.465619       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 09:52:43.466098       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 09:52:43.465661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 09:52:43.466475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 09:52:43.465708       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 09:52:43.466547       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 09:52:44.283748       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 09:52:44.283899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 09:52:44.298471       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 09:52:44.298600       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 09:52:44.308176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 09:52:44.308211       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 09:52:44.368171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 09:52:44.368220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 09:52:44.475172       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 09:52:44.475222       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 09:52:44.547058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 09:52:44.547101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 09:52:44.577386       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 09:52:44.577428       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:52:44.666426       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 09:52:44.666468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 09:52:44.684623       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 09:52:44.684695       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1025 09:52:47.752997       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 09:52:58 old-k8s-version-676314 kubelet[1418]: I1025 09:52:58.312699    1418 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:52:59 old-k8s-version-676314 kubelet[1418]: I1025 09:52:59.082061    1418 topology_manager.go:215] "Topology Admit Handler" podUID="00dfddca-e613-4c0d-81ff-90d998264105" podNamespace="kube-system" podName="kube-proxy-bsxx6"
	Oct 25 09:52:59 old-k8s-version-676314 kubelet[1418]: I1025 09:52:59.082274    1418 topology_manager.go:215] "Topology Admit Handler" podUID="99efd308-5c6d-461a-baaa-09017e967973" podNamespace="kube-system" podName="kindnet-5hnxc"
	Oct 25 09:52:59 old-k8s-version-676314 kubelet[1418]: I1025 09:52:59.094865    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/99efd308-5c6d-461a-baaa-09017e967973-cni-cfg\") pod \"kindnet-5hnxc\" (UID: \"99efd308-5c6d-461a-baaa-09017e967973\") " pod="kube-system/kindnet-5hnxc"
	Oct 25 09:52:59 old-k8s-version-676314 kubelet[1418]: I1025 09:52:59.095160    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99efd308-5c6d-461a-baaa-09017e967973-xtables-lock\") pod \"kindnet-5hnxc\" (UID: \"99efd308-5c6d-461a-baaa-09017e967973\") " pod="kube-system/kindnet-5hnxc"
	Oct 25 09:52:59 old-k8s-version-676314 kubelet[1418]: I1025 09:52:59.095295    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99efd308-5c6d-461a-baaa-09017e967973-lib-modules\") pod \"kindnet-5hnxc\" (UID: \"99efd308-5c6d-461a-baaa-09017e967973\") " pod="kube-system/kindnet-5hnxc"
	Oct 25 09:52:59 old-k8s-version-676314 kubelet[1418]: I1025 09:52:59.095630    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7ws6\" (UniqueName: \"kubernetes.io/projected/99efd308-5c6d-461a-baaa-09017e967973-kube-api-access-v7ws6\") pod \"kindnet-5hnxc\" (UID: \"99efd308-5c6d-461a-baaa-09017e967973\") " pod="kube-system/kindnet-5hnxc"
	Oct 25 09:52:59 old-k8s-version-676314 kubelet[1418]: I1025 09:52:59.095690    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00dfddca-e613-4c0d-81ff-90d998264105-lib-modules\") pod \"kube-proxy-bsxx6\" (UID: \"00dfddca-e613-4c0d-81ff-90d998264105\") " pod="kube-system/kube-proxy-bsxx6"
	Oct 25 09:52:59 old-k8s-version-676314 kubelet[1418]: I1025 09:52:59.095719    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/00dfddca-e613-4c0d-81ff-90d998264105-kube-proxy\") pod \"kube-proxy-bsxx6\" (UID: \"00dfddca-e613-4c0d-81ff-90d998264105\") " pod="kube-system/kube-proxy-bsxx6"
	Oct 25 09:52:59 old-k8s-version-676314 kubelet[1418]: I1025 09:52:59.095748    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00dfddca-e613-4c0d-81ff-90d998264105-xtables-lock\") pod \"kube-proxy-bsxx6\" (UID: \"00dfddca-e613-4c0d-81ff-90d998264105\") " pod="kube-system/kube-proxy-bsxx6"
	Oct 25 09:52:59 old-k8s-version-676314 kubelet[1418]: I1025 09:52:59.095776    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqzrh\" (UniqueName: \"kubernetes.io/projected/00dfddca-e613-4c0d-81ff-90d998264105-kube-api-access-cqzrh\") pod \"kube-proxy-bsxx6\" (UID: \"00dfddca-e613-4c0d-81ff-90d998264105\") " pod="kube-system/kube-proxy-bsxx6"
	Oct 25 09:53:02 old-k8s-version-676314 kubelet[1418]: I1025 09:53:02.803864    1418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bsxx6" podStartSLOduration=3.8038103469999998 podCreationTimestamp="2025-10-25 09:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:52:59.804890879 +0000 UTC m=+13.191097371" watchObservedRunningTime="2025-10-25 09:53:02.803810347 +0000 UTC m=+16.190016833"
	Oct 25 09:53:02 old-k8s-version-676314 kubelet[1418]: I1025 09:53:02.804060    1418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5hnxc" podStartSLOduration=2.28816888 podCreationTimestamp="2025-10-25 09:52:58 +0000 UTC" firstStartedPulling="2025-10-25 09:52:59.410282863 +0000 UTC m=+12.796489335" lastFinishedPulling="2025-10-25 09:53:01.926142878 +0000 UTC m=+15.312349355" observedRunningTime="2025-10-25 09:53:02.803599245 +0000 UTC m=+16.189805731" watchObservedRunningTime="2025-10-25 09:53:02.8040289 +0000 UTC m=+16.190235390"
	Oct 25 09:53:12 old-k8s-version-676314 kubelet[1418]: I1025 09:53:12.884221    1418 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 25 09:53:12 old-k8s-version-676314 kubelet[1418]: I1025 09:53:12.908805    1418 topology_manager.go:215] "Topology Admit Handler" podUID="7595a20b-4bea-4b2d-942f-237f464dcf71" podNamespace="kube-system" podName="coredns-5dd5756b68-qffxt"
	Oct 25 09:53:12 old-k8s-version-676314 kubelet[1418]: I1025 09:53:12.910528    1418 topology_manager.go:215] "Topology Admit Handler" podUID="d7e681d4-1c2c-4bf0-ae5b-da945d5d4f59" podNamespace="kube-system" podName="storage-provisioner"
	Oct 25 09:53:13 old-k8s-version-676314 kubelet[1418]: I1025 09:53:13.005105    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhcbf\" (UniqueName: \"kubernetes.io/projected/d7e681d4-1c2c-4bf0-ae5b-da945d5d4f59-kube-api-access-fhcbf\") pod \"storage-provisioner\" (UID: \"d7e681d4-1c2c-4bf0-ae5b-da945d5d4f59\") " pod="kube-system/storage-provisioner"
	Oct 25 09:53:13 old-k8s-version-676314 kubelet[1418]: I1025 09:53:13.005158    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d7e681d4-1c2c-4bf0-ae5b-da945d5d4f59-tmp\") pod \"storage-provisioner\" (UID: \"d7e681d4-1c2c-4bf0-ae5b-da945d5d4f59\") " pod="kube-system/storage-provisioner"
	Oct 25 09:53:13 old-k8s-version-676314 kubelet[1418]: I1025 09:53:13.005287    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7595a20b-4bea-4b2d-942f-237f464dcf71-config-volume\") pod \"coredns-5dd5756b68-qffxt\" (UID: \"7595a20b-4bea-4b2d-942f-237f464dcf71\") " pod="kube-system/coredns-5dd5756b68-qffxt"
	Oct 25 09:53:13 old-k8s-version-676314 kubelet[1418]: I1025 09:53:13.005383    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56wks\" (UniqueName: \"kubernetes.io/projected/7595a20b-4bea-4b2d-942f-237f464dcf71-kube-api-access-56wks\") pod \"coredns-5dd5756b68-qffxt\" (UID: \"7595a20b-4bea-4b2d-942f-237f464dcf71\") " pod="kube-system/coredns-5dd5756b68-qffxt"
	Oct 25 09:53:13 old-k8s-version-676314 kubelet[1418]: I1025 09:53:13.825476    1418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.825436296 podCreationTimestamp="2025-10-25 09:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:13.82506029 +0000 UTC m=+27.211266776" watchObservedRunningTime="2025-10-25 09:53:13.825436296 +0000 UTC m=+27.211642844"
	Oct 25 09:53:13 old-k8s-version-676314 kubelet[1418]: I1025 09:53:13.834665    1418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qffxt" podStartSLOduration=14.83460963 podCreationTimestamp="2025-10-25 09:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:13.834415797 +0000 UTC m=+27.220622285" watchObservedRunningTime="2025-10-25 09:53:13.83460963 +0000 UTC m=+27.220816118"
	Oct 25 09:53:16 old-k8s-version-676314 kubelet[1418]: I1025 09:53:16.080241    1418 topology_manager.go:215] "Topology Admit Handler" podUID="f284177c-1d8d-4d46-8b15-3d8cb988f9d5" podNamespace="default" podName="busybox"
	Oct 25 09:53:16 old-k8s-version-676314 kubelet[1418]: I1025 09:53:16.124759    1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5jvf\" (UniqueName: \"kubernetes.io/projected/f284177c-1d8d-4d46-8b15-3d8cb988f9d5-kube-api-access-c5jvf\") pod \"busybox\" (UID: \"f284177c-1d8d-4d46-8b15-3d8cb988f9d5\") " pod="default/busybox"
	Oct 25 09:53:18 old-k8s-version-676314 kubelet[1418]: I1025 09:53:18.844039    1418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.812694447 podCreationTimestamp="2025-10-25 09:53:16 +0000 UTC" firstStartedPulling="2025-10-25 09:53:16.400941456 +0000 UTC m=+29.787147921" lastFinishedPulling="2025-10-25 09:53:18.432240048 +0000 UTC m=+31.818446531" observedRunningTime="2025-10-25 09:53:18.843990519 +0000 UTC m=+32.230197004" watchObservedRunningTime="2025-10-25 09:53:18.843993057 +0000 UTC m=+32.230199543"
	
	
	==> storage-provisioner [6e19f5c2d024e6a896d113487bd5bc175eb60f7c9143db91a373c41e1665deb4] <==
	I1025 09:53:13.282619       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:53:13.293238       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:53:13.293292       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 09:53:13.301670       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:53:13.301835       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"830beb51-92da-458c-968a-0c40cd8858b2", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-676314_3f9dbbd9-04a6-4572-a2b3-442835a1f7b0 became leader
	I1025 09:53:13.301907       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-676314_3f9dbbd9-04a6-4572-a2b3-442835a1f7b0!
	I1025 09:53:13.403065       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-676314_3f9dbbd9-04a6-4572-a2b3-442835a1f7b0!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-676314 -n old-k8s-version-676314
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-676314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.36s)
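The two post-mortem probes above can be replayed by hand against the same profile (names copied verbatim from this run) to check whether the apiserver was still reachable and which pods were stuck outside Running:

	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-676314 -n old-k8s-version-676314
	kubectl --context old-k8s-version-676314 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'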

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-042675 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-042675 --alsologtostderr -v=1: exit status 80 (2.368707005s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-042675 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:53:30.950271  437627 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:53:30.950600  437627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:30.950616  437627 out.go:374] Setting ErrFile to fd 2...
	I1025 09:53:30.950622  437627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:30.950949  437627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:53:30.951265  437627 out.go:368] Setting JSON to false
	I1025 09:53:30.951315  437627 mustload.go:65] Loading cluster: newest-cni-042675
	I1025 09:53:30.951847  437627 config.go:182] Loaded profile config "newest-cni-042675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:30.952406  437627 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:30.971315  437627 host.go:66] Checking if "newest-cni-042675" exists ...
	I1025 09:53:30.971645  437627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:53:31.040434  437627 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:87 OomKillDisable:false NGoroutines:93 SystemTime:2025-10-25 09:53:31.029330048 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:53:31.041141  437627 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-042675 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:53:31.043041  437627 out.go:179] * Pausing node newest-cni-042675 ... 
	I1025 09:53:31.044111  437627 host.go:66] Checking if "newest-cni-042675" exists ...
	I1025 09:53:31.044385  437627 ssh_runner.go:195] Run: systemctl --version
	I1025 09:53:31.044431  437627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:31.066971  437627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:31.169111  437627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:53:31.182644  437627 pause.go:52] kubelet running: true
	I1025 09:53:31.182735  437627 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:53:31.324132  437627 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:53:31.324228  437627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:53:31.392134  437627 cri.go:89] found id: "ab8d5dfecfb639f51b1199df21e177f9c0ef17f03b815319f962da908cf3f139"
	I1025 09:53:31.392158  437627 cri.go:89] found id: "e5376683894476899240a201201c6255b55aba53dc2c98876839e76e1aae5856"
	I1025 09:53:31.392163  437627 cri.go:89] found id: "e09c7f242156e743288e824a75789e841f5e0338224eb02ca3463157dde8fd76"
	I1025 09:53:31.392166  437627 cri.go:89] found id: "c6a20cd0bc60d27b3580719acf9e5a11bd5e671c8382a15ba38ec0beddb7e9f6"
	I1025 09:53:31.392168  437627 cri.go:89] found id: "3ea3c7de539896c9176c40583cd88b28e00fc00fdebf05a360d418da896c2b11"
	I1025 09:53:31.392171  437627 cri.go:89] found id: "deea16b116d1e92886d0803275bb09d578376d1950b22febd0bdacb1321204a0"
	I1025 09:53:31.392174  437627 cri.go:89] found id: ""
	I1025 09:53:31.392214  437627 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:53:31.404370  437627 retry.go:31] will retry after 131.22345ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:31Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:53:31.536786  437627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:53:31.550671  437627 pause.go:52] kubelet running: false
	I1025 09:53:31.550732  437627 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:53:31.667825  437627 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:53:31.667941  437627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:53:31.748300  437627 cri.go:89] found id: "ab8d5dfecfb639f51b1199df21e177f9c0ef17f03b815319f962da908cf3f139"
	I1025 09:53:31.748331  437627 cri.go:89] found id: "e5376683894476899240a201201c6255b55aba53dc2c98876839e76e1aae5856"
	I1025 09:53:31.748336  437627 cri.go:89] found id: "e09c7f242156e743288e824a75789e841f5e0338224eb02ca3463157dde8fd76"
	I1025 09:53:31.748340  437627 cri.go:89] found id: "c6a20cd0bc60d27b3580719acf9e5a11bd5e671c8382a15ba38ec0beddb7e9f6"
	I1025 09:53:31.748367  437627 cri.go:89] found id: "3ea3c7de539896c9176c40583cd88b28e00fc00fdebf05a360d418da896c2b11"
	I1025 09:53:31.748372  437627 cri.go:89] found id: "deea16b116d1e92886d0803275bb09d578376d1950b22febd0bdacb1321204a0"
	I1025 09:53:31.748375  437627 cri.go:89] found id: ""
	I1025 09:53:31.748433  437627 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:53:31.765725  437627 retry.go:31] will retry after 201.169833ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:31Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:53:31.967099  437627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:53:31.980836  437627 pause.go:52] kubelet running: false
	I1025 09:53:31.980889  437627 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:53:32.098951  437627 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:53:32.099034  437627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:53:32.165449  437627 cri.go:89] found id: "ab8d5dfecfb639f51b1199df21e177f9c0ef17f03b815319f962da908cf3f139"
	I1025 09:53:32.165471  437627 cri.go:89] found id: "e5376683894476899240a201201c6255b55aba53dc2c98876839e76e1aae5856"
	I1025 09:53:32.165477  437627 cri.go:89] found id: "e09c7f242156e743288e824a75789e841f5e0338224eb02ca3463157dde8fd76"
	I1025 09:53:32.165481  437627 cri.go:89] found id: "c6a20cd0bc60d27b3580719acf9e5a11bd5e671c8382a15ba38ec0beddb7e9f6"
	I1025 09:53:32.165485  437627 cri.go:89] found id: "3ea3c7de539896c9176c40583cd88b28e00fc00fdebf05a360d418da896c2b11"
	I1025 09:53:32.165489  437627 cri.go:89] found id: "deea16b116d1e92886d0803275bb09d578376d1950b22febd0bdacb1321204a0"
	I1025 09:53:32.165493  437627 cri.go:89] found id: ""
	I1025 09:53:32.165551  437627 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:53:32.178059  437627 retry.go:31] will retry after 829.046568ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:32Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:53:33.007484  437627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:53:33.020570  437627 pause.go:52] kubelet running: false
	I1025 09:53:33.020630  437627 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:53:33.148664  437627 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:53:33.148747  437627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:53:33.227726  437627 cri.go:89] found id: "ab8d5dfecfb639f51b1199df21e177f9c0ef17f03b815319f962da908cf3f139"
	I1025 09:53:33.227750  437627 cri.go:89] found id: "e5376683894476899240a201201c6255b55aba53dc2c98876839e76e1aae5856"
	I1025 09:53:33.227755  437627 cri.go:89] found id: "e09c7f242156e743288e824a75789e841f5e0338224eb02ca3463157dde8fd76"
	I1025 09:53:33.227760  437627 cri.go:89] found id: "c6a20cd0bc60d27b3580719acf9e5a11bd5e671c8382a15ba38ec0beddb7e9f6"
	I1025 09:53:33.227763  437627 cri.go:89] found id: "3ea3c7de539896c9176c40583cd88b28e00fc00fdebf05a360d418da896c2b11"
	I1025 09:53:33.227768  437627 cri.go:89] found id: "deea16b116d1e92886d0803275bb09d578376d1950b22febd0bdacb1321204a0"
	I1025 09:53:33.227772  437627 cri.go:89] found id: ""
	I1025 09:53:33.227817  437627 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:53:33.244557  437627 out.go:203] 
	W1025 09:53:33.245883  437627 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:53:33.245918  437627 out.go:285] * 
	* 
	W1025 09:53:33.250432  437627 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:53:33.251474  437627 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-042675 --alsologtostderr -v=1 failed: exit status 80
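Every retry above runs the same container-listing probe, and each attempt fails identically before minikube gives up with GUEST_PAUSE: `sudo runc list -f json` cannot open /run/runc, runc's default state-root directory. A minimal hand-check, assuming the node container from this run is still up (whether /run/runc exists at all depends on how crio's runtime root is configured in the kicbase image):

	docker exec newest-cni-042675 sudo runc list -f json   # reproduces: open /run/runc: no such file or directory
	docker exec newest-cni-042675 ls -ld /run/runc         # confirm whether the runc state root exists
	docker exec newest-cni-042675 sudo crictl ps -a        # crio itself still lists the six containers found above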
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-042675
helpers_test.go:243: (dbg) docker inspect newest-cni-042675:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce",
	        "Created": "2025-10-25T09:52:44.327443817Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 434810,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:53:20.007467351Z",
	            "FinishedAt": "2025-10-25T09:53:19.147175944Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce/hosts",
	        "LogPath": "/var/lib/docker/containers/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce-json.log",
	        "Name": "/newest-cni-042675",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-042675:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-042675",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce",
	                "LowerDir": "/var/lib/docker/overlay2/22ae4172cbe3c43d98e8b23c6d4928d84d681a598f6ccb09273b14bd2d20ccfb-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22ae4172cbe3c43d98e8b23c6d4928d84d681a598f6ccb09273b14bd2d20ccfb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22ae4172cbe3c43d98e8b23c6d4928d84d681a598f6ccb09273b14bd2d20ccfb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22ae4172cbe3c43d98e8b23c6d4928d84d681a598f6ccb09273b14bd2d20ccfb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-042675",
	                "Source": "/var/lib/docker/volumes/newest-cni-042675/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-042675",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-042675",
	                "name.minikube.sigs.k8s.io": "newest-cni-042675",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67bda2c4a1b63aa99016ae11dd0274ff1866ba646ca15dd7d464f042cd73746e",
	            "SandboxKey": "/var/run/docker/netns/67bda2c4a1b6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33230"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33231"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33234"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33232"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33233"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-042675": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:ce:ca:2b:85:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a3ae4e80fdc178e1b920fe2d5b1786ace400be5b54cd55cc0897dd02ba348996",
	                    "EndpointID": "f8e1040976a3bbb490986b6fdcaafd6ea3f0ac1e115f4cdd11fe0891417b07c6",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-042675",
	                        "3a2253343bb2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
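The post-mortem dumps the whole inspect document above, but when only a field or two matters the same data can be pulled with a Go template in a single call, which is exactly what the minikube logs further down do with expressions like (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort. A minimal sketch, assuming a docker CLI on PATH:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	name := "newest-cni-042675" // container name from this report
    	// Pull just the runtime state and the host port mapped to 22/tcp.
    	format := `{{.State.Status}} {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "inspect failed: %v\n", err)
    		os.Exit(1)
    	}
    	fmt.Printf("%s", out) // e.g. "running 33230" for the container above
    }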
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-042675 -n newest-cni-042675
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-042675 -n newest-cni-042675: exit status 2 (350.004459ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-042675 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                    │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                              │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cri-dockerd --version                                                                                                                                                                                       │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-129588                                                                                                                                                                                                                  │ kubernetes-upgrade-129588    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat containerd --no-pager                                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                  │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /etc/containerd/config.toml                                                                                                                                                                             │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo containerd config dump                                                                                                                                                                                      │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl status crio --all --full --no-pager                                                                                                                                                               │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat crio --no-pager                                                                                                                                                                               │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                     │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo crio config                                                                                                                                                                                                 │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ delete  │ -p enable-default-cni-035825                                                                                                                                                                                                                  │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-042675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p newest-cni-042675 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-042675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-676314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p old-k8s-version-676314 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ image   │ newest-cni-042675 image list --format=json                                                                                                                                                                                                    │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ pause   │ -p newest-cni-042675 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:53:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:53:19.773511  434603 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:53:19.773777  434603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:19.773788  434603 out.go:374] Setting ErrFile to fd 2...
	I1025 09:53:19.773794  434603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:19.773993  434603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:53:19.774486  434603 out.go:368] Setting JSON to false
	I1025 09:53:19.775873  434603 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5744,"bootTime":1761380256,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:53:19.775964  434603 start.go:141] virtualization: kvm guest
	I1025 09:53:19.778007  434603 out.go:179] * [newest-cni-042675] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:53:19.779678  434603 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:53:19.779677  434603 notify.go:220] Checking for updates...
	I1025 09:53:19.781023  434603 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:53:19.782319  434603 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:53:19.783610  434603 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:53:19.784802  434603 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:53:19.786012  434603 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:53:19.787722  434603 config.go:182] Loaded profile config "newest-cni-042675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:19.788279  434603 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:53:19.811602  434603 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:53:19.811685  434603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:53:19.870733  434603 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:53:19.859806551 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:53:19.870859  434603 docker.go:318] overlay module found
	I1025 09:53:19.872622  434603 out.go:179] * Using the docker driver based on existing profile
	I1025 09:53:19.873853  434603 start.go:305] selected driver: docker
	I1025 09:53:19.873867  434603 start.go:925] validating driver "docker" against &{Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:53:19.873956  434603 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:53:19.874618  434603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:53:19.933915  434603 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:53:19.923450647 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:53:19.934265  434603 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:53:19.934299  434603 cni.go:84] Creating CNI manager for ""
	I1025 09:53:19.934362  434603 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:53:19.934413  434603 start.go:349] cluster config:
	{Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:53:19.936710  434603 out.go:179] * Starting "newest-cni-042675" primary control-plane node in "newest-cni-042675" cluster
	I1025 09:53:19.937738  434603 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:53:19.938786  434603 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:53:19.939853  434603 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:53:19.939887  434603 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:53:19.939903  434603 cache.go:58] Caching tarball of preloaded images
	I1025 09:53:19.939972  434603 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:53:19.939990  434603 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:53:19.940011  434603 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:53:19.940113  434603 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/config.json ...
	I1025 09:53:19.961461  434603 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:53:19.961488  434603 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:53:19.961504  434603 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:53:19.961533  434603 start.go:360] acquireMachinesLock for newest-cni-042675: {Name:mk7919472b767e9cb704209265f0c08926368ab3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:19.961631  434603 start.go:364] duration metric: took 75.533µs to acquireMachinesLock for "newest-cni-042675"
	I1025 09:53:19.961656  434603 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:53:19.961663  434603 fix.go:54] fixHost starting: 
	I1025 09:53:19.961915  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:19.979722  434603 fix.go:112] recreateIfNeeded on newest-cni-042675: state=Stopped err=<nil>
	W1025 09:53:19.979756  434603 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:53:17.532535  417881 node_ready.go:57] node "no-preload-656799" has "Ready":"False" status (will retry)
	W1025 09:53:20.032444  417881 node_ready.go:57] node "no-preload-656799" has "Ready":"False" status (will retry)
	W1025 09:53:19.550089  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	W1025 09:53:22.049980  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	I1025 09:53:19.981655  434603 out.go:252] * Restarting existing docker container for "newest-cni-042675" ...
	I1025 09:53:19.981738  434603 cli_runner.go:164] Run: docker start newest-cni-042675
	I1025 09:53:20.244452  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:20.263132  434603 kic.go:430] container "newest-cni-042675" state is running.
	I1025 09:53:20.263663  434603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042675
	I1025 09:53:20.282999  434603 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/config.json ...
	I1025 09:53:20.283222  434603 machine.go:93] provisionDockerMachine start ...
	I1025 09:53:20.283290  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:20.302362  434603 main.go:141] libmachine: Using SSH client type: native
	I1025 09:53:20.302643  434603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33230 <nil> <nil>}
	I1025 09:53:20.302656  434603 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:53:20.303404  434603 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42078->127.0.0.1:33230: read: connection reset by peer
	I1025 09:53:23.445660  434603 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-042675
	
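The dial error at 09:53:20 followed by a clean run at 09:53:23 is the provisioner's normal retry behavior: docker publishes the port mapping before sshd inside the container is listening, so the first TCP handshake gets reset. A minimal Go sketch of such a wait loop, using the forwarded address from this run; the attempt count and backoff are illustrative, not minikube's actual tuning:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH dials the forwarded SSH port until a TCP handshake succeeds.
    func waitForSSH(addr string, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		var c net.Conn
    		if c, err = net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
    			c.Close()
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("ssh port never came up: %w", err)
    }

    func main() {
    	if err := waitForSSH("127.0.0.1:33230", 20); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("ssh reachable")
    }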
	I1025 09:53:23.445687  434603 ubuntu.go:182] provisioning hostname "newest-cni-042675"
	I1025 09:53:23.445755  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:23.464263  434603 main.go:141] libmachine: Using SSH client type: native
	I1025 09:53:23.464618  434603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33230 <nil> <nil>}
	I1025 09:53:23.464638  434603 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-042675 && echo "newest-cni-042675" | sudo tee /etc/hostname
	I1025 09:53:23.617144  434603 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-042675
	
	I1025 09:53:23.617206  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:23.637112  434603 main.go:141] libmachine: Using SSH client type: native
	I1025 09:53:23.637321  434603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33230 <nil> <nil>}
	I1025 09:53:23.637338  434603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-042675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-042675/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-042675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:53:23.779190  434603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:53:23.779218  434603 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:53:23.779240  434603 ubuntu.go:190] setting up certificates
	I1025 09:53:23.779252  434603 provision.go:84] configureAuth start
	I1025 09:53:23.779310  434603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042675
	I1025 09:53:23.797637  434603 provision.go:143] copyHostCerts
	I1025 09:53:23.797724  434603 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:53:23.797745  434603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:53:23.797826  434603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:53:23.797982  434603 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:53:23.797996  434603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:53:23.798043  434603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:53:23.798193  434603 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:53:23.798205  434603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:53:23.798249  434603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:53:23.798339  434603 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.newest-cni-042675 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-042675]
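The "generating server cert" step above issues a serving certificate whose SANs cover every name and address the machine is reached by (loopback, the container IP, the hostname aliases). A self-contained Go sketch of the same idea, self-signed for brevity whereas minikube signs against its ca.pem/ca-key.pem; the SANs come from the san=[...] list above and the lifetime from the CertExpiration:26280h0m0s field in the config dump:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-042675"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs: the DNS names and IPs from the provisioning log line above.
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-042675"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }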
	I1025 09:53:23.990329  434603 provision.go:177] copyRemoteCerts
	I1025 09:53:23.990395  434603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:53:23.990428  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.008847  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.108756  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:53:24.126966  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:53:24.145322  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:53:24.163049  434603 provision.go:87] duration metric: took 383.782933ms to configureAuth
	I1025 09:53:24.163079  434603 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:53:24.163260  434603 config.go:182] Loaded profile config "newest-cni-042675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:24.163377  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.181273  434603 main.go:141] libmachine: Using SSH client type: native
	I1025 09:53:24.181542  434603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33230 <nil> <nil>}
	I1025 09:53:24.181568  434603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:53:24.454606  434603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:53:24.454634  434603 machine.go:96] duration metric: took 4.171394685s to provisionDockerMachine
	I1025 09:53:24.454647  434603 start.go:293] postStartSetup for "newest-cni-042675" (driver="docker")
	I1025 09:53:24.454660  434603 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:53:24.454735  434603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:53:24.454788  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.475283  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.578292  434603 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:53:24.582207  434603 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:53:24.582234  434603 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:53:24.582244  434603 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:53:24.582297  434603 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:53:24.582430  434603 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:53:24.582597  434603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:53:24.590297  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:53:24.608243  434603 start.go:296] duration metric: took 153.577737ms for postStartSetup
	I1025 09:53:24.608370  434603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:53:24.608427  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.626241  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.724259  434603 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:53:24.729324  434603 fix.go:56] duration metric: took 4.767655075s for fixHost
	I1025 09:53:24.729386  434603 start.go:83] releasing machines lock for "newest-cni-042675", held for 4.767735987s
	I1025 09:53:24.729491  434603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042675
	I1025 09:53:24.748071  434603 ssh_runner.go:195] Run: cat /version.json
	I1025 09:53:24.748139  434603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:53:24.748223  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.748143  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.768083  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.768476  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.925089  434603 ssh_runner.go:195] Run: systemctl --version
	I1025 09:53:24.932469  434603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:53:24.967868  434603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:53:24.972763  434603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:53:24.972823  434603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:53:24.981758  434603 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:53:24.981780  434603 start.go:495] detecting cgroup driver to use...
	I1025 09:53:24.981813  434603 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:53:24.981878  434603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:53:24.996854  434603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:53:25.009584  434603 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:53:25.009649  434603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:53:25.023923  434603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:53:25.037770  434603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:53:25.138823  434603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:53:25.235687  434603 docker.go:234] disabling docker service ...
	I1025 09:53:25.235757  434603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:53:25.252330  434603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:53:25.267278  434603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:53:25.354057  434603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:53:25.441625  434603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:53:25.456619  434603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:53:25.473379  434603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:53:25.473454  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.484228  434603 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:53:25.484289  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.493990  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.503472  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.513233  434603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:53:25.522416  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.532563  434603 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.541834  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
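Net effect of the sed pipeline above on /etc/crio/crio.conf.d/02-crio.conf, reconstructed here for readability; the section headers follow upstream crio.conf conventions and the real drop-in may group the keys differently:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]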
	I1025 09:53:25.552718  434603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:53:25.562386  434603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:53:25.570830  434603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:53:25.663161  434603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:53:25.768763  434603 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:53:25.768828  434603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:53:25.773115  434603 start.go:563] Will wait 60s for crictl version
	I1025 09:53:25.773178  434603 ssh_runner.go:195] Run: which crictl
	I1025 09:53:25.777144  434603 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:53:25.803639  434603 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:53:25.803716  434603 ssh_runner.go:195] Run: crio --version
	I1025 09:53:25.835380  434603 ssh_runner.go:195] Run: crio --version
	I1025 09:53:25.872240  434603 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:53:25.873393  434603 cli_runner.go:164] Run: docker network inspect newest-cni-042675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:53:25.892746  434603 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:53:25.896824  434603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:53:25.908783  434603 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1025 09:53:22.532966  417881 node_ready.go:57] node "no-preload-656799" has "Ready":"False" status (will retry)
	W1025 09:53:25.032303  417881 node_ready.go:57] node "no-preload-656799" has "Ready":"False" status (will retry)
	I1025 09:53:25.910013  434603 kubeadm.go:883] updating cluster {Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:53:25.910165  434603 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:53:25.910239  434603 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:53:25.948762  434603 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:53:25.948786  434603 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:53:25.948836  434603 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:53:25.979284  434603 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:53:25.979308  434603 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:53:25.979317  434603 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:53:25.979449  434603 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-042675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
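The doubled ExecStart in the drop-in above is the standard systemd override idiom: the first, empty `ExecStart=` clears the command list inherited from the base kubelet.service so only the second line runs. The merged result is inspectable on the node (sketch):

	systemctl cat kubelet   # prints the base unit plus the 10-kubeadm.conf drop-in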
	I1025 09:53:25.979509  434603 ssh_runner.go:195] Run: crio config
	I1025 09:53:26.028600  434603 cni.go:84] Creating CNI manager for ""
	I1025 09:53:26.028634  434603 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:53:26.028659  434603 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 09:53:26.028692  434603 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-042675 NodeName:newest-cni-042675 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:53:26.028875  434603 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-042675"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:53:26.028953  434603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:53:26.039587  434603 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:53:26.039660  434603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:53:26.050054  434603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 09:53:26.066563  434603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:53:26.082692  434603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
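The rendered kubeadm config lands at /var/tmp/minikube/kubeadm.yaml.new (the 2214-byte transfer above). Recent kubeadm releases can sanity-check such a file offline; a sketch, assuming the bundled v1.34.1 binary accepts the subcommand:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new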
	I1025 09:53:26.097268  434603 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:53:26.101882  434603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:53:26.112484  434603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:53:26.197224  434603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:53:26.226418  434603 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675 for IP: 192.168.103.2
	I1025 09:53:26.226440  434603 certs.go:195] generating shared ca certs ...
	I1025 09:53:26.226461  434603 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:53:26.226670  434603 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:53:26.226755  434603 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:53:26.226772  434603 certs.go:257] generating profile certs ...
	I1025 09:53:26.226903  434603 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/client.key
	I1025 09:53:26.226986  434603 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.key.c1b0a430
	I1025 09:53:26.227039  434603 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.key
	I1025 09:53:26.227179  434603 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:53:26.227218  434603 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:53:26.227231  434603 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:53:26.227279  434603 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:53:26.227312  434603 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:53:26.227340  434603 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:53:26.227413  434603 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:53:26.228293  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:53:26.249101  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:53:26.270753  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:53:26.292194  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:53:26.317791  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:53:26.340738  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:53:26.359990  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:53:26.379888  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:53:26.398784  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:53:26.419773  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:53:26.441534  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:53:26.460244  434603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:53:26.475976  434603 ssh_runner.go:195] Run: openssl version
	I1025 09:53:26.483204  434603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:53:26.492371  434603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:53:26.496434  434603 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:53:26.496492  434603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:53:26.539861  434603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:53:26.548342  434603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:53:26.557814  434603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:53:26.562177  434603 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:53:26.562238  434603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:53:26.606014  434603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:53:26.615990  434603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:53:26.625112  434603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:53:26.629846  434603 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:53:26.629902  434603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:53:26.668926  434603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
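The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: the output of `openssl x509 -hash` for each certificate plus a ".0" suffix, which is the filename OpenSSL's certificate-directory lookup expects. The same wiring as a loop (sketch):

	for pem in /usr/share/ca-certificates/*.pem; do
	  h=$(openssl x509 -hash -noout -in "$pem")    # e.g. b5213941 for minikubeCA.pem
	  sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"   # the name OpenSSL resolves by hash
	done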
	I1025 09:53:26.677940  434603 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:53:26.682068  434603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:53:26.721055  434603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:53:26.770546  434603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:53:26.827155  434603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:53:26.881576  434603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:53:26.944972  434603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
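Each check above passes -checkend 86400, i.e. "will this certificate still be valid 86400 seconds (24 h) from now?"; exit status 0 means yes. In isolation (sketch):

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "valid for at least another 24h"
	else
	  echo "expires within 24h"   # minikube would regenerate rather than reuse it
	fi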
	I1025 09:53:26.989855  434603 kubeadm.go:400] StartCluster: {Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:53:26.989972  434603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:53:26.990108  434603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:53:27.022797  434603 cri.go:89] found id: "e09c7f242156e743288e824a75789e841f5e0338224eb02ca3463157dde8fd76"
	I1025 09:53:27.022822  434603 cri.go:89] found id: "c6a20cd0bc60d27b3580719acf9e5a11bd5e671c8382a15ba38ec0beddb7e9f6"
	I1025 09:53:27.022827  434603 cri.go:89] found id: "3ea3c7de539896c9176c40583cd88b28e00fc00fdebf05a360d418da896c2b11"
	I1025 09:53:27.022831  434603 cri.go:89] found id: "deea16b116d1e92886d0803275bb09d578376d1950b22febd0bdacb1321204a0"
	I1025 09:53:27.022836  434603 cri.go:89] found id: ""
	I1025 09:53:27.022887  434603 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:53:27.039831  434603 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:27Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:53:27.039926  434603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:53:27.052160  434603 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:53:27.052179  434603 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:53:27.052221  434603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:53:27.060412  434603 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:53:27.061099  434603 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-042675" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:53:27.061595  434603 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-130604/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-042675" cluster setting kubeconfig missing "newest-cni-042675" context setting]
	I1025 09:53:27.062267  434603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:53:27.063722  434603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:53:27.072959  434603 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 09:53:27.073000  434603 kubeadm.go:601] duration metric: took 20.814722ms to restartPrimaryControlPlane
	I1025 09:53:27.073011  434603 kubeadm.go:402] duration metric: took 83.192109ms to StartCluster
	I1025 09:53:27.073029  434603 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:53:27.073087  434603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:53:27.075413  434603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:53:27.075691  434603 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:53:27.076035  434603 config.go:182] Loaded profile config "newest-cni-042675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:27.075986  434603 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:53:27.076119  434603 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-042675"
	I1025 09:53:27.076140  434603 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-042675"
	W1025 09:53:27.076153  434603 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:53:27.076152  434603 addons.go:69] Setting dashboard=true in profile "newest-cni-042675"
	I1025 09:53:27.076179  434603 addons.go:238] Setting addon dashboard=true in "newest-cni-042675"
	I1025 09:53:27.076188  434603 addons.go:69] Setting default-storageclass=true in profile "newest-cni-042675"
	I1025 09:53:27.076180  434603 host.go:66] Checking if "newest-cni-042675" exists ...
	I1025 09:53:27.076214  434603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-042675"
	W1025 09:53:27.076194  434603 addons.go:247] addon dashboard should already be in state true
	I1025 09:53:27.076249  434603 host.go:66] Checking if "newest-cni-042675" exists ...
	I1025 09:53:27.076573  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:27.076725  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:27.076751  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:27.082934  434603 out.go:179] * Verifying Kubernetes components...
	I1025 09:53:27.084364  434603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:53:27.107571  434603 addons.go:238] Setting addon default-storageclass=true in "newest-cni-042675"
	W1025 09:53:27.107596  434603 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:53:27.107624  434603 host.go:66] Checking if "newest-cni-042675" exists ...
	I1025 09:53:27.108738  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:27.110620  434603 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:53:27.111893  434603 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:53:27.112969  434603 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:53:27.113016  434603 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:53:27.113086  434603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:53:27.113155  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	W1025 09:53:24.050903  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	W1025 09:53:26.051511  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	W1025 09:53:28.550564  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	I1025 09:53:27.114576  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:53:27.114595  434603 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:53:27.114660  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:27.147600  434603 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:53:27.147689  434603 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:53:27.147769  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:27.156247  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:27.173295  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:27.186712  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:27.270077  434603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:53:27.282865  434603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:53:27.287485  434603 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:53:27.287550  434603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:53:27.304775  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:53:27.304806  434603 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:53:27.306968  434603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:53:27.325006  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:53:27.325032  434603 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:53:27.349072  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:53:27.349097  434603 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:53:27.374734  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:53:27.374761  434603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:53:27.398177  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:53:27.398207  434603 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:53:27.418820  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:53:27.418847  434603 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:53:27.438294  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:53:27.438320  434603 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:53:27.453991  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:53:27.454021  434603 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:53:27.470305  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:53:27.470332  434603 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:53:27.485940  434603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:53:29.229231  434603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.946330513s)
	I1025 09:53:29.229289  434603 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.941713898s)
	I1025 09:53:29.229324  434603 api_server.go:72] duration metric: took 2.153604964s to wait for apiserver process to appear ...
	I1025 09:53:29.229338  434603 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:53:29.229381  434603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.922353989s)
	I1025 09:53:29.229394  434603 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:53:29.229526  434603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.743547969s)
	I1025 09:53:29.231049  434603 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-042675 addons enable metrics-server
	
	I1025 09:53:29.238036  434603 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:53:29.238061  434603 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:53:29.247036  434603 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1025 09:53:29.248341  434603 addons.go:514] duration metric: took 2.172361914s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 09:53:29.729560  434603 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:53:29.735443  434603 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:53:29.735476  434603 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:53:30.229911  434603 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:53:30.234336  434603 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:53:30.235401  434603 api_server.go:141] control plane version: v1.34.1
	I1025 09:53:30.235424  434603 api_server.go:131] duration metric: took 1.006081483s to wait for apiserver health ...
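The 500s above come from post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that have not finished; the wait simply re-polls /healthz until every hook reports ok and the endpoint returns 200. Reduced to shell (sketch; -k because the cluster CA is not in the host trust store, and assuming anonymous /healthz access is allowed — minikube itself authenticates with the cluster certs):

	until curl -fsk https://192.168.103.2:8443/healthz >/dev/null; do
	  sleep 0.5   # each 500 body lists exactly which hooks are still failing
	done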
	I1025 09:53:30.235434  434603 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:53:30.238752  434603 system_pods.go:59] 8 kube-system pods found
	I1025 09:53:30.238784  434603 system_pods.go:61] "coredns-66bc5c9577-v4xpv" [c6b5ed04-03a3-4b67-bd8b-3d0392236861] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:53:30.238795  434603 system_pods.go:61] "etcd-newest-cni-042675" [559f055a-4502-4e2e-a28e-096449f29d72] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:53:30.238808  434603 system_pods.go:61] "kindnet-xsn67" [6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 09:53:30.238818  434603 system_pods.go:61] "kube-apiserver-newest-cni-042675" [0be15777-76f2-46e9-b9da-fe0f7a4426a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:53:30.238830  434603 system_pods.go:61] "kube-controller-manager-newest-cni-042675" [8ffd378a-c0d8-4135-a9be-b7532cb0f44c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:53:30.238842  434603 system_pods.go:61] "kube-proxy-468gg" [7360d3df-fd12-429c-b79f-f8a744d0de49] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:53:30.238857  434603 system_pods.go:61] "kube-scheduler-newest-cni-042675" [98395f6b-3670-40f9-a7ca-1e9d5c7c0c4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:53:30.238879  434603 system_pods.go:61] "storage-provisioner" [43ce25b5-99bd-4159-9b8c-efd6ca6d159c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:53:30.238894  434603 system_pods.go:74] duration metric: took 3.453104ms to wait for pod list to return data ...
	I1025 09:53:30.238907  434603 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:53:30.241219  434603 default_sa.go:45] found service account: "default"
	I1025 09:53:30.241239  434603 default_sa.go:55] duration metric: took 2.325802ms for default service account to be created ...
	I1025 09:53:30.241251  434603 kubeadm.go:586] duration metric: took 3.165532282s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:53:30.241270  434603 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:53:30.243465  434603 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:53:30.243487  434603 node_conditions.go:123] node cpu capacity is 8
	I1025 09:53:30.243499  434603 node_conditions.go:105] duration metric: took 2.219256ms to run NodePressure ...
	I1025 09:53:30.243510  434603 start.go:241] waiting for startup goroutines ...
	I1025 09:53:30.243519  434603 start.go:246] waiting for cluster config update ...
	I1025 09:53:30.243531  434603 start.go:255] writing updated cluster config ...
	I1025 09:53:30.243768  434603 ssh_runner.go:195] Run: rm -f paused
	I1025 09:53:30.292236  434603 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:53:30.294042  434603 out.go:179] * Done! kubectl is now configured to use "newest-cni-042675" cluster and "default" namespace by default
	W1025 09:53:27.039520  417881 node_ready.go:57] node "no-preload-656799" has "Ready":"False" status (will retry)
	I1025 09:53:29.032813  417881 node_ready.go:49] node "no-preload-656799" is "Ready"
	I1025 09:53:29.032842  417881 node_ready.go:38] duration metric: took 13.504064788s for node "no-preload-656799" to be "Ready" ...
	I1025 09:53:29.032926  417881 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:53:29.032995  417881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:53:29.056365  417881 api_server.go:72] duration metric: took 13.818057121s to wait for apiserver process to appear ...
	I1025 09:53:29.056408  417881 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:53:29.056428  417881 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 09:53:29.068688  417881 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 09:53:29.069709  417881 api_server.go:141] control plane version: v1.34.1
	I1025 09:53:29.069850  417881 api_server.go:131] duration metric: took 13.420698ms to wait for apiserver health ...
	I1025 09:53:29.069879  417881 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:53:29.075489  417881 system_pods.go:59] 8 kube-system pods found
	I1025 09:53:29.075589  417881 system_pods.go:61] "coredns-66bc5c9577-sw9hv" [b8784813-9a51-43f5-ae3a-d5f9a1cd7d41] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:53:29.075670  417881 system_pods.go:61] "etcd-no-preload-656799" [6568c784-57c2-42e4-9c3f-8f82801d1d97] Running
	I1025 09:53:29.075704  417881 system_pods.go:61] "kindnet-nbj7f" [c4a372bb-2500-4e98-9012-a3076916ffe8] Running
	I1025 09:53:29.075710  417881 system_pods.go:61] "kube-apiserver-no-preload-656799" [b0f0fbe5-c605-4ed7-b1d7-81a2205ef358] Running
	I1025 09:53:29.075717  417881 system_pods.go:61] "kube-controller-manager-no-preload-656799" [380b9304-6bbe-48ef-8e63-1148da5002b8] Running
	I1025 09:53:29.075722  417881 system_pods.go:61] "kube-proxy-gfph2" [150e67b8-c0b3-4e74-a94d-a43506de4a53] Running
	I1025 09:53:29.075727  417881 system_pods.go:61] "kube-scheduler-no-preload-656799" [65fa78d7-800a-4e58-be4e-9b609dfda168] Running
	I1025 09:53:29.075734  417881 system_pods.go:61] "storage-provisioner" [4e4f58ae-a176-4a16-a7ec-035c2170c2c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:53:29.075743  417881 system_pods.go:74] duration metric: took 5.833388ms to wait for pod list to return data ...
	I1025 09:53:29.075808  417881 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:53:29.079761  417881 default_sa.go:45] found service account: "default"
	I1025 09:53:29.079869  417881 default_sa.go:55] duration metric: took 4.040873ms for default service account to be created ...
	I1025 09:53:29.079892  417881 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:53:29.084757  417881 system_pods.go:86] 8 kube-system pods found
	I1025 09:53:29.084786  417881 system_pods.go:89] "coredns-66bc5c9577-sw9hv" [b8784813-9a51-43f5-ae3a-d5f9a1cd7d41] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:53:29.084811  417881 system_pods.go:89] "etcd-no-preload-656799" [6568c784-57c2-42e4-9c3f-8f82801d1d97] Running
	I1025 09:53:29.084819  417881 system_pods.go:89] "kindnet-nbj7f" [c4a372bb-2500-4e98-9012-a3076916ffe8] Running
	I1025 09:53:29.084847  417881 system_pods.go:89] "kube-apiserver-no-preload-656799" [b0f0fbe5-c605-4ed7-b1d7-81a2205ef358] Running
	I1025 09:53:29.084904  417881 system_pods.go:89] "kube-controller-manager-no-preload-656799" [380b9304-6bbe-48ef-8e63-1148da5002b8] Running
	I1025 09:53:29.084912  417881 system_pods.go:89] "kube-proxy-gfph2" [150e67b8-c0b3-4e74-a94d-a43506de4a53] Running
	I1025 09:53:29.084917  417881 system_pods.go:89] "kube-scheduler-no-preload-656799" [65fa78d7-800a-4e58-be4e-9b609dfda168] Running
	I1025 09:53:29.084924  417881 system_pods.go:89] "storage-provisioner" [4e4f58ae-a176-4a16-a7ec-035c2170c2c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:53:29.084984  417881 retry.go:31] will retry after 204.032384ms: missing components: kube-dns
	I1025 09:53:29.294011  417881 system_pods.go:86] 8 kube-system pods found
	I1025 09:53:29.294051  417881 system_pods.go:89] "coredns-66bc5c9577-sw9hv" [b8784813-9a51-43f5-ae3a-d5f9a1cd7d41] Running
	I1025 09:53:29.294060  417881 system_pods.go:89] "etcd-no-preload-656799" [6568c784-57c2-42e4-9c3f-8f82801d1d97] Running
	I1025 09:53:29.294065  417881 system_pods.go:89] "kindnet-nbj7f" [c4a372bb-2500-4e98-9012-a3076916ffe8] Running
	I1025 09:53:29.294070  417881 system_pods.go:89] "kube-apiserver-no-preload-656799" [b0f0fbe5-c605-4ed7-b1d7-81a2205ef358] Running
	I1025 09:53:29.294077  417881 system_pods.go:89] "kube-controller-manager-no-preload-656799" [380b9304-6bbe-48ef-8e63-1148da5002b8] Running
	I1025 09:53:29.294082  417881 system_pods.go:89] "kube-proxy-gfph2" [150e67b8-c0b3-4e74-a94d-a43506de4a53] Running
	I1025 09:53:29.294087  417881 system_pods.go:89] "kube-scheduler-no-preload-656799" [65fa78d7-800a-4e58-be4e-9b609dfda168] Running
	I1025 09:53:29.294099  417881 system_pods.go:89] "storage-provisioner" [4e4f58ae-a176-4a16-a7ec-035c2170c2c3] Running
	I1025 09:53:29.294110  417881 system_pods.go:126] duration metric: took 214.18501ms to wait for k8s-apps to be running ...
	I1025 09:53:29.294120  417881 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:53:29.294183  417881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:53:29.311269  417881 system_svc.go:56] duration metric: took 17.137558ms WaitForService to wait for kubelet
	I1025 09:53:29.311302  417881 kubeadm.go:586] duration metric: took 14.073010823s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:53:29.311325  417881 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:53:29.314883  417881 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:53:29.314914  417881 node_conditions.go:123] node cpu capacity is 8
	I1025 09:53:29.314934  417881 node_conditions.go:105] duration metric: took 3.602613ms to run NodePressure ...
	I1025 09:53:29.314948  417881 start.go:241] waiting for startup goroutines ...
	I1025 09:53:29.314958  417881 start.go:246] waiting for cluster config update ...
	I1025 09:53:29.314971  417881 start.go:255] writing updated cluster config ...
	I1025 09:53:29.315321  417881 ssh_runner.go:195] Run: rm -f paused
	I1025 09:53:29.320099  417881 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:53:29.327433  417881 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sw9hv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.332989  417881 pod_ready.go:94] pod "coredns-66bc5c9577-sw9hv" is "Ready"
	I1025 09:53:29.333026  417881 pod_ready.go:86] duration metric: took 5.566458ms for pod "coredns-66bc5c9577-sw9hv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.335819  417881 pod_ready.go:83] waiting for pod "etcd-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.340667  417881 pod_ready.go:94] pod "etcd-no-preload-656799" is "Ready"
	I1025 09:53:29.340690  417881 pod_ready.go:86] duration metric: took 4.845438ms for pod "etcd-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.342846  417881 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.347152  417881 pod_ready.go:94] pod "kube-apiserver-no-preload-656799" is "Ready"
	I1025 09:53:29.347178  417881 pod_ready.go:86] duration metric: took 4.306484ms for pod "kube-apiserver-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.349218  417881 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.724531  417881 pod_ready.go:94] pod "kube-controller-manager-no-preload-656799" is "Ready"
	I1025 09:53:29.724561  417881 pod_ready.go:86] duration metric: took 375.320261ms for pod "kube-controller-manager-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.924646  417881 pod_ready.go:83] waiting for pod "kube-proxy-gfph2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:30.325095  417881 pod_ready.go:94] pod "kube-proxy-gfph2" is "Ready"
	I1025 09:53:30.325129  417881 pod_ready.go:86] duration metric: took 400.453623ms for pod "kube-proxy-gfph2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:30.525016  417881 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:30.925557  417881 pod_ready.go:94] pod "kube-scheduler-no-preload-656799" is "Ready"
	I1025 09:53:30.925618  417881 pod_ready.go:86] duration metric: took 400.564111ms for pod "kube-scheduler-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:30.925633  417881 pod_ready.go:40] duration metric: took 1.605502105s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
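The pod_ready phase above checks, per pod, the same condition `kubectl wait` exposes. An equivalent manual probe for the kube-dns pods (sketch; the context name matches the profile, as configured below):

	kubectl --context no-preload-656799 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m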
	I1025 09:53:30.985641  417881 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:53:30.987672  417881 out.go:179] * Done! kubectl is now configured to use "no-preload-656799" cluster and "default" namespace by default
	W1025 09:53:31.050267  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	W1025 09:53:33.050650  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.612639878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.615821193Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=009200a2-7652-4e0d-b3dd-58b2b1d6281f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.616286298Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=497e06b6-abd0-46cc-842f-9fb1c8d79740 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.617649051Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.618309461Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.618628954Z" level=info msg="Ran pod sandbox 97661a1323da4ac1c7cbe7689d685850de12081c98d13bfefa6171c5d4d76d05 with infra container: kube-system/kindnet-xsn67/POD" id=009200a2-7652-4e0d-b3dd-58b2b1d6281f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.619263923Z" level=info msg="Ran pod sandbox 2bf1da1991381329fe43f5cc451cb2a76f334f97e3f008186b4c42564fb97893 with infra container: kube-system/kube-proxy-468gg/POD" id=497e06b6-abd0-46cc-842f-9fb1c8d79740 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.619950808Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b411ac17-30a5-4248-9782-eade50895709 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.620616352Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=df0b361d-db80-421b-89b5-066596f22737 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.621360548Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9bb14420-eb1f-4551-a93f-e572464bd498 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.62231087Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=02e98db3-f288-4def-bb96-1aab1228f624 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.622807275Z" level=info msg="Creating container: kube-system/kindnet-xsn67/kindnet-cni" id=cbe4dfbc-c16e-4792-b8ad-02e716eea93d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.62291698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.623790572Z" level=info msg="Creating container: kube-system/kube-proxy-468gg/kube-proxy" id=d1ee7522-4ae0-45b8-ae28-080487d3259a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.623896193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.627968376Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.628464635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.63051951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.63105895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.658726062Z" level=info msg="Created container e5376683894476899240a201201c6255b55aba53dc2c98876839e76e1aae5856: kube-system/kindnet-xsn67/kindnet-cni" id=cbe4dfbc-c16e-4792-b8ad-02e716eea93d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.659521011Z" level=info msg="Starting container: e5376683894476899240a201201c6255b55aba53dc2c98876839e76e1aae5856" id=669f6e5d-5b6c-4c2f-858f-5bd03752f0e9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.6614688Z" level=info msg="Created container ab8d5dfecfb639f51b1199df21e177f9c0ef17f03b815319f962da908cf3f139: kube-system/kube-proxy-468gg/kube-proxy" id=d1ee7522-4ae0-45b8-ae28-080487d3259a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.661812267Z" level=info msg="Started container" PID=1047 containerID=e5376683894476899240a201201c6255b55aba53dc2c98876839e76e1aae5856 description=kube-system/kindnet-xsn67/kindnet-cni id=669f6e5d-5b6c-4c2f-858f-5bd03752f0e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=97661a1323da4ac1c7cbe7689d685850de12081c98d13bfefa6171c5d4d76d05
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.662136706Z" level=info msg="Starting container: ab8d5dfecfb639f51b1199df21e177f9c0ef17f03b815319f962da908cf3f139" id=616a5102-01e6-44cb-a568-0e7f18be768e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.6654511Z" level=info msg="Started container" PID=1048 containerID=ab8d5dfecfb639f51b1199df21e177f9c0ef17f03b815319f962da908cf3f139 description=kube-system/kube-proxy-468gg/kube-proxy id=616a5102-01e6-44cb-a568-0e7f18be768e name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bf1da1991381329fe43f5cc451cb2a76f334f97e3f008186b4c42564fb97893
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ab8d5dfecfb63       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   2bf1da1991381       kube-proxy-468gg                            kube-system
	e537668389447       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   97661a1323da4       kindnet-xsn67                               kube-system
	e09c7f242156e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   e51b90b3a83ce       kube-apiserver-newest-cni-042675            kube-system
	c6a20cd0bc60d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   7234458571dc9       etcd-newest-cni-042675                      kube-system
	3ea3c7de53989       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   16100fd201a00       kube-scheduler-newest-cni-042675            kube-system
	deea16b116d1e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   6d9a8a6b2028f       kube-controller-manager-newest-cni-042675   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-042675
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-042675
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=newest-cni-042675
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_53_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:53:00 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-042675
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:53:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:53:28 +0000   Sat, 25 Oct 2025 09:52:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:53:28 +0000   Sat, 25 Oct 2025 09:52:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:53:28 +0000   Sat, 25 Oct 2025 09:52:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 09:53:28 +0000   Sat, 25 Oct 2025 09:52:57 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-042675
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                967fd215-cebb-4af9-b5cd-64a07c73ec38
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-042675                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-xsn67                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-042675             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-042675    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-468gg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-042675             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 4s    kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node newest-cni-042675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node newest-cni-042675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node newest-cni-042675 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node newest-cni-042675 event: Registered Node newest-cni-042675 in Controller
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-042675 event: Registered Node newest-cni-042675 in Controller
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	
	
	==> etcd [c6a20cd0bc60d27b3580719acf9e5a11bd5e671c8382a15ba38ec0beddb7e9f6] <==
	{"level":"warn","ts":"2025-10-25T09:53:27.864076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.872586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.882799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.904529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.920072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.931713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.940786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.948762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.956887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.967771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.976046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.985240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.993503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.000376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.008609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.016250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.023798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.030777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.038651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.045016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.052782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.073385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.081232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.087852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.142018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40410","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:53:34 up  1:35,  0 user,  load average: 5.37, 4.31, 2.60
	Linux newest-cni-042675 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e5376683894476899240a201201c6255b55aba53dc2c98876839e76e1aae5856] <==
	I1025 09:53:29.836342       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:53:29.929138       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 09:53:29.929318       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:53:29.929340       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:53:29.929408       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:53:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:53:30.038398       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:53:30.038433       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:53:30.038447       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:53:30.129153       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:53:30.464450       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:53:30.464545       1 metrics.go:72] Registering metrics
	I1025 09:53:30.464653       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [e09c7f242156e743288e824a75789e841f5e0338224eb02ca3463157dde8fd76] <==
	I1025 09:53:28.681848       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:53:28.682932       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:53:28.683007       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:53:28.684701       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:53:28.685434       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:53:28.685650       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:53:28.685669       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:53:28.685677       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:53:28.685682       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:53:28.688280       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:53:28.688316       1 policy_source.go:240] refreshing policies
	I1025 09:53:28.713824       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:53:28.732472       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:53:29.008411       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:53:29.065611       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:53:29.093476       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:53:29.101662       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:53:29.115258       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:53:29.163929       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.208.222"}
	I1025 09:53:29.178716       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.240.117"}
	I1025 09:53:29.571041       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:53:32.030522       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:53:32.030569       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:53:32.182215       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:53:32.330974       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [deea16b116d1e92886d0803275bb09d578376d1950b22febd0bdacb1321204a0] <==
	I1025 09:53:31.827770       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:53:31.827784       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:53:31.827788       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:53:31.827808       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:53:31.827793       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:53:31.827843       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:53:31.827960       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-042675"
	I1025 09:53:31.828011       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:53:31.828036       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:53:31.828121       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:53:31.828440       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:53:31.828457       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:53:31.829804       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:53:31.829923       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:53:31.832611       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:53:31.832678       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:53:31.832733       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:53:31.832738       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:53:31.832742       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:53:31.833777       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:53:31.845235       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:53:31.848384       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:53:31.849604       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:53:31.853754       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:53:31.859055       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ab8d5dfecfb639f51b1199df21e177f9c0ef17f03b815319f962da908cf3f139] <==
	I1025 09:53:29.705682       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:53:29.778228       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:53:29.878950       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:53:29.878989       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 09:53:29.879085       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:53:29.897180       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:53:29.897240       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:53:29.902443       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:53:29.902920       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:53:29.902952       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:53:29.904536       1 config.go:200] "Starting service config controller"
	I1025 09:53:29.904568       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:53:29.904603       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:53:29.904611       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:53:29.904602       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:53:29.904633       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:53:29.904644       1 config.go:309] "Starting node config controller"
	I1025 09:53:29.904663       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:53:29.904672       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:53:30.004743       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:53:30.004775       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:53:30.004758       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3ea3c7de539896c9176c40583cd88b28e00fc00fdebf05a360d418da896c2b11] <==
	I1025 09:53:28.112430       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:53:28.660503       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:53:28.660578       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1025 09:53:28.660596       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:53:28.660624       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:53:28.696887       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:53:28.696914       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:53:28.700733       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:53:28.700828       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:53:28.701163       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:53:28.702261       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:53:28.801569       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: E1025 09:53:28.727071     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-042675\" already exists" pod="kube-system/kube-scheduler-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.727105     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: E1025 09:53:28.735672     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-042675\" already exists" pod="kube-system/etcd-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.735717     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: E1025 09:53:28.742431     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-042675\" already exists" pod="kube-system/kube-apiserver-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.742511     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: E1025 09:53:28.749325     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-042675\" already exists" pod="kube-system/kube-controller-manager-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.759771     672 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.759873     672 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.759909     672 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.760836     672 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.303117     672 apiserver.go:52] "Watching apiserver"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.306725     672 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.356255     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-042675"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.356402     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-042675"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: E1025 09:53:29.363255     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-042675\" already exists" pod="kube-system/etcd-newest-cni-042675"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: E1025 09:53:29.363261     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-042675\" already exists" pod="kube-system/kube-apiserver-newest-cni-042675"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.386577     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7360d3df-fd12-429c-b79f-f8a744d0de49-xtables-lock\") pod \"kube-proxy-468gg\" (UID: \"7360d3df-fd12-429c-b79f-f8a744d0de49\") " pod="kube-system/kube-proxy-468gg"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.386614     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3-lib-modules\") pod \"kindnet-xsn67\" (UID: \"6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3\") " pod="kube-system/kindnet-xsn67"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.386648     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7360d3df-fd12-429c-b79f-f8a744d0de49-lib-modules\") pod \"kube-proxy-468gg\" (UID: \"7360d3df-fd12-429c-b79f-f8a744d0de49\") " pod="kube-system/kube-proxy-468gg"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.386884     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3-cni-cfg\") pod \"kindnet-xsn67\" (UID: \"6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3\") " pod="kube-system/kindnet-xsn67"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.386916     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3-xtables-lock\") pod \"kindnet-xsn67\" (UID: \"6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3\") " pod="kube-system/kindnet-xsn67"
	Oct 25 09:53:31 newest-cni-042675 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:53:31 newest-cni-042675 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:53:31 newest-cni-042675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
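The describe-nodes output above shows the node held at Ready=False with reason NetworkPluginNotReady until kindnet writes a CNI config into /etc/cni/net.d/. A minimal sketch for polling just that condition, reusing the kubeconfig context name that appears later in this report:

	kubectl --context newest-cni-042675 get node newest-cni-042675 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

It prints False while the CNI config is missing and flips to True once the network plugin is ready.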
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-042675 -n newest-cni-042675
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-042675 -n newest-cni-042675: exit status 2 (328.929869ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
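minikube reflects component state in the exit code of `status`, so a non-zero exit paired with "Running" on stdout is informational here: the journal above shows systemd stopping the kubelet, which is what a pause is expected to do. A sketch that reads several components in one call; {{.Host}} and {{.APIServer}} appear elsewhere in this report, while {{.Kubelet}} is assumed from minikube's default status template:

	out/minikube-linux-amd64 status -p newest-cni-042675 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'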
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-042675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-v4xpv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xgmks kubernetes-dashboard-855c9754f9-q8khq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-042675 describe pod coredns-66bc5c9577-v4xpv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xgmks kubernetes-dashboard-855c9754f9-q8khq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-042675 describe pod coredns-66bc5c9577-v4xpv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xgmks kubernetes-dashboard-855c9754f9-q8khq: exit status 1 (61.736732ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-v4xpv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-xgmks" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-q8khq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-042675 describe pod coredns-66bc5c9577-v4xpv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xgmks kubernetes-dashboard-855c9754f9-q8khq: exit status 1
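The NotFound errors above suggest the four pods listed by the field-selector query had already been deleted by the time the follow-up describe ran. A variant of that query which also prints the phase, so transient pods can be told apart from genuinely stuck ones (a sketch using kubectl's standard custom-columns output):

	kubectl --context newest-cni-042675 get po -A \
	  --field-selector=status.phase!=Running \
	  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase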
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-042675
helpers_test.go:243: (dbg) docker inspect newest-cni-042675:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce",
	        "Created": "2025-10-25T09:52:44.327443817Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 434810,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:53:20.007467351Z",
	            "FinishedAt": "2025-10-25T09:53:19.147175944Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce/hosts",
	        "LogPath": "/var/lib/docker/containers/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce/3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce-json.log",
	        "Name": "/newest-cni-042675",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-042675:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-042675",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3a2253343bb2aef240f412871a688c402b651ba22bf251595ddf65efbf7739ce",
	                "LowerDir": "/var/lib/docker/overlay2/22ae4172cbe3c43d98e8b23c6d4928d84d681a598f6ccb09273b14bd2d20ccfb-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22ae4172cbe3c43d98e8b23c6d4928d84d681a598f6ccb09273b14bd2d20ccfb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22ae4172cbe3c43d98e8b23c6d4928d84d681a598f6ccb09273b14bd2d20ccfb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22ae4172cbe3c43d98e8b23c6d4928d84d681a598f6ccb09273b14bd2d20ccfb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-042675",
	                "Source": "/var/lib/docker/volumes/newest-cni-042675/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-042675",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-042675",
	                "name.minikube.sigs.k8s.io": "newest-cni-042675",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67bda2c4a1b63aa99016ae11dd0274ff1866ba646ca15dd7d464f042cd73746e",
	            "SandboxKey": "/var/run/docker/netns/67bda2c4a1b6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33230"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33231"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33234"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33232"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33233"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-042675": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:ce:ca:2b:85:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a3ae4e80fdc178e1b920fe2d5b1786ace400be5b54cd55cc0897dd02ba348996",
	                    "EndpointID": "f8e1040976a3bbb490986b6fdcaafd6ea3f0ac1e115f4cdd11fe0891417b07c6",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-042675",
	                        "3a2253343bb2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
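The inspect output shows the kicbase container publishing every cluster port on loopback only, with 8443/tcp (the Kubernetes API server) mapped to 127.0.0.1:33233. A single mapping can be read back without parsing the JSON, assuming the standard docker CLI:

	docker port newest-cni-042675 8443/tcp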
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-042675 -n newest-cni-042675
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-042675 -n newest-cni-042675: exit status 2 (322.340282ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-042675 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                    │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                              │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cri-dockerd --version                                                                                                                                                                                       │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-129588                                                                                                                                                                                                                  │ kubernetes-upgrade-129588    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat containerd --no-pager                                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                  │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /etc/containerd/config.toml                                                                                                                                                                             │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo containerd config dump                                                                                                                                                                                      │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl status crio --all --full --no-pager                                                                                                                                                               │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat crio --no-pager                                                                                                                                                                               │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                     │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo crio config                                                                                                                                                                                                 │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ delete  │ -p enable-default-cni-035825                                                                                                                                                                                                                  │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-042675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p newest-cni-042675 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-042675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-676314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p old-k8s-version-676314 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ image   │ newest-cni-042675 image list --format=json                                                                                                                                                                                                    │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ pause   │ -p newest-cni-042675 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
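The Audit table above is minikube's persisted record of recent CLI invocations, printed as part of the `logs` output gathered by the post-mortem. If the same record is needed outside a failing test, it can usually be pulled on its own; a minimal sketch, assuming this report's MINIKUBE_HOME layout and that this minikube build supports the `--audit` flag of `minikube logs`:

	# Hypothetical standalone reproduction of the Audit section (paths taken from this report)
	export MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	out/minikube-linux-amd64 -p newest-cni-042675 logs --audit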
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:53:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:53:19.773511  434603 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:53:19.773777  434603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:19.773788  434603 out.go:374] Setting ErrFile to fd 2...
	I1025 09:53:19.773794  434603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:19.773993  434603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:53:19.774486  434603 out.go:368] Setting JSON to false
	I1025 09:53:19.775873  434603 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5744,"bootTime":1761380256,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:53:19.775964  434603 start.go:141] virtualization: kvm guest
	I1025 09:53:19.778007  434603 out.go:179] * [newest-cni-042675] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:53:19.779678  434603 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:53:19.779677  434603 notify.go:220] Checking for updates...
	I1025 09:53:19.781023  434603 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:53:19.782319  434603 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:53:19.783610  434603 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:53:19.784802  434603 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:53:19.786012  434603 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:53:19.787722  434603 config.go:182] Loaded profile config "newest-cni-042675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:19.788279  434603 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:53:19.811602  434603 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:53:19.811685  434603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:53:19.870733  434603 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:53:19.859806551 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:53:19.870859  434603 docker.go:318] overlay module found
	I1025 09:53:19.872622  434603 out.go:179] * Using the docker driver based on existing profile
	I1025 09:53:19.873853  434603 start.go:305] selected driver: docker
	I1025 09:53:19.873867  434603 start.go:925] validating driver "docker" against &{Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:53:19.873956  434603 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:53:19.874618  434603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:53:19.933915  434603 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:53:19.923450647 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:53:19.934265  434603 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:53:19.934299  434603 cni.go:84] Creating CNI manager for ""
	I1025 09:53:19.934362  434603 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:53:19.934413  434603 start.go:349] cluster config:
	{Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:53:19.936710  434603 out.go:179] * Starting "newest-cni-042675" primary control-plane node in "newest-cni-042675" cluster
	I1025 09:53:19.937738  434603 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:53:19.938786  434603 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:53:19.939853  434603 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:53:19.939887  434603 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:53:19.939903  434603 cache.go:58] Caching tarball of preloaded images
	I1025 09:53:19.939972  434603 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:53:19.939990  434603 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:53:19.940011  434603 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:53:19.940113  434603 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/config.json ...
	I1025 09:53:19.961461  434603 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:53:19.961488  434603 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:53:19.961504  434603 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:53:19.961533  434603 start.go:360] acquireMachinesLock for newest-cni-042675: {Name:mk7919472b767e9cb704209265f0c08926368ab3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:19.961631  434603 start.go:364] duration metric: took 75.533µs to acquireMachinesLock for "newest-cni-042675"
	I1025 09:53:19.961656  434603 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:53:19.961663  434603 fix.go:54] fixHost starting: 
	I1025 09:53:19.961915  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:19.979722  434603 fix.go:112] recreateIfNeeded on newest-cni-042675: state=Stopped err=<nil>
	W1025 09:53:19.979756  434603 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:53:17.532535  417881 node_ready.go:57] node "no-preload-656799" has "Ready":"False" status (will retry)
	W1025 09:53:20.032444  417881 node_ready.go:57] node "no-preload-656799" has "Ready":"False" status (will retry)
	W1025 09:53:19.550089  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	W1025 09:53:22.049980  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	I1025 09:53:19.981655  434603 out.go:252] * Restarting existing docker container for "newest-cni-042675" ...
	I1025 09:53:19.981738  434603 cli_runner.go:164] Run: docker start newest-cni-042675
	I1025 09:53:20.244452  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:20.263132  434603 kic.go:430] container "newest-cni-042675" state is running.
	I1025 09:53:20.263663  434603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042675
	I1025 09:53:20.282999  434603 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/config.json ...
	I1025 09:53:20.283222  434603 machine.go:93] provisionDockerMachine start ...
	I1025 09:53:20.283290  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:20.302362  434603 main.go:141] libmachine: Using SSH client type: native
	I1025 09:53:20.302643  434603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33230 <nil> <nil>}
	I1025 09:53:20.302656  434603 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:53:20.303404  434603 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42078->127.0.0.1:33230: read: connection reset by peer
	I1025 09:53:23.445660  434603 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-042675
	
	I1025 09:53:23.445687  434603 ubuntu.go:182] provisioning hostname "newest-cni-042675"
	I1025 09:53:23.445755  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:23.464263  434603 main.go:141] libmachine: Using SSH client type: native
	I1025 09:53:23.464618  434603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33230 <nil> <nil>}
	I1025 09:53:23.464638  434603 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-042675 && echo "newest-cni-042675" | sudo tee /etc/hostname
	I1025 09:53:23.617144  434603 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-042675
	
	I1025 09:53:23.617206  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:23.637112  434603 main.go:141] libmachine: Using SSH client type: native
	I1025 09:53:23.637321  434603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33230 <nil> <nil>}
	I1025 09:53:23.637338  434603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-042675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-042675/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-042675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:53:23.779190  434603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:53:23.779218  434603 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:53:23.779240  434603 ubuntu.go:190] setting up certificates
	I1025 09:53:23.779252  434603 provision.go:84] configureAuth start
	I1025 09:53:23.779310  434603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042675
	I1025 09:53:23.797637  434603 provision.go:143] copyHostCerts
	I1025 09:53:23.797724  434603 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:53:23.797745  434603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:53:23.797826  434603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:53:23.797982  434603 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:53:23.797996  434603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:53:23.798043  434603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:53:23.798193  434603 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:53:23.798205  434603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:53:23.798249  434603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:53:23.798339  434603 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.newest-cni-042675 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-042675]
	I1025 09:53:23.990329  434603 provision.go:177] copyRemoteCerts
	I1025 09:53:23.990395  434603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:53:23.990428  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.008847  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.108756  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:53:24.126966  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:53:24.145322  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:53:24.163049  434603 provision.go:87] duration metric: took 383.782933ms to configureAuth
	I1025 09:53:24.163079  434603 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:53:24.163260  434603 config.go:182] Loaded profile config "newest-cni-042675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:24.163377  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.181273  434603 main.go:141] libmachine: Using SSH client type: native
	I1025 09:53:24.181542  434603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33230 <nil> <nil>}
	I1025 09:53:24.181568  434603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:53:24.454606  434603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:53:24.454634  434603 machine.go:96] duration metric: took 4.171394685s to provisionDockerMachine
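The provisioning step above writes a one-line sysconfig drop-in on the node and restarts cri-o with it. Checking the result by hand would look like the following sketch, assuming shell access to the profile via `minikube ssh`:

	# Inspect the drop-in written by the provisioning step logged above
	out/minikube-linux-amd64 -p newest-cni-042675 ssh "cat /etc/sysconfig/crio.minikube"
	# Expected, per the tee command in the log:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '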
	I1025 09:53:24.454647  434603 start.go:293] postStartSetup for "newest-cni-042675" (driver="docker")
	I1025 09:53:24.454660  434603 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:53:24.454735  434603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:53:24.454788  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.475283  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.578292  434603 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:53:24.582207  434603 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:53:24.582234  434603 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:53:24.582244  434603 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:53:24.582297  434603 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:53:24.582430  434603 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:53:24.582597  434603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:53:24.590297  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:53:24.608243  434603 start.go:296] duration metric: took 153.577737ms for postStartSetup
	I1025 09:53:24.608370  434603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:53:24.608427  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.626241  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.724259  434603 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:53:24.729324  434603 fix.go:56] duration metric: took 4.767655075s for fixHost
	I1025 09:53:24.729386  434603 start.go:83] releasing machines lock for "newest-cni-042675", held for 4.767735987s
	I1025 09:53:24.729491  434603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-042675
	I1025 09:53:24.748071  434603 ssh_runner.go:195] Run: cat /version.json
	I1025 09:53:24.748139  434603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:53:24.748223  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.748143  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:24.768083  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.768476  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:24.925089  434603 ssh_runner.go:195] Run: systemctl --version
	I1025 09:53:24.932469  434603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:53:24.967868  434603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:53:24.972763  434603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:53:24.972823  434603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:53:24.981758  434603 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:53:24.981780  434603 start.go:495] detecting cgroup driver to use...
	I1025 09:53:24.981813  434603 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:53:24.981878  434603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:53:24.996854  434603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:53:25.009584  434603 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:53:25.009649  434603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:53:25.023923  434603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:53:25.037770  434603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:53:25.138823  434603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:53:25.235687  434603 docker.go:234] disabling docker service ...
	I1025 09:53:25.235757  434603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:53:25.252330  434603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:53:25.267278  434603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:53:25.354057  434603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:53:25.441625  434603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:53:25.456619  434603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:53:25.473379  434603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:53:25.473454  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.484228  434603 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:53:25.484289  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.493990  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.503472  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.513233  434603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:53:25.522416  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.532563  434603 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.541834  434603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:53:25.552718  434603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:53:25.562386  434603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:53:25.570830  434603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:53:25.663161  434603 ssh_runner.go:195] Run: sudo systemctl restart crio
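The run of `sed` edits above is one logical reconfiguration of /etc/crio/crio.conf.d/02-crio.conf: set the pause image, switch the cgroup manager to systemd, and rebuild the conmon_cgroup entry. A condensed sketch of the same effect, with every pattern and value taken from the commands in the log (the default_sysctls seeding is noted but not expanded):

	# Condensed equivalent of the cri-o reconfiguration steps logged above
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i \
	  -e 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
	  -e 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' \
	  -e '/conmon_cgroup = .*/d' \
	  "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# The log additionally seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio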
	I1025 09:53:25.768763  434603 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:53:25.768828  434603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:53:25.773115  434603 start.go:563] Will wait 60s for crictl version
	I1025 09:53:25.773178  434603 ssh_runner.go:195] Run: which crictl
	I1025 09:53:25.777144  434603 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:53:25.803639  434603 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:53:25.803716  434603 ssh_runner.go:195] Run: crio --version
	I1025 09:53:25.835380  434603 ssh_runner.go:195] Run: crio --version
	I1025 09:53:25.872240  434603 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:53:25.873393  434603 cli_runner.go:164] Run: docker network inspect newest-cni-042675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:53:25.892746  434603 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:53:25.896824  434603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:53:25.908783  434603 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1025 09:53:22.532966  417881 node_ready.go:57] node "no-preload-656799" has "Ready":"False" status (will retry)
	W1025 09:53:25.032303  417881 node_ready.go:57] node "no-preload-656799" has "Ready":"False" status (will retry)
	I1025 09:53:25.910013  434603 kubeadm.go:883] updating cluster {Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:53:25.910165  434603 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:53:25.910239  434603 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:53:25.948762  434603 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:53:25.948786  434603 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:53:25.948836  434603 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:53:25.979284  434603 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:53:25.979308  434603 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:53:25.979317  434603 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:53:25.979449  434603 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-042675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:53:25.979509  434603 ssh_runner.go:195] Run: crio config
	I1025 09:53:26.028600  434603 cni.go:84] Creating CNI manager for ""
	I1025 09:53:26.028634  434603 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:53:26.028659  434603 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 09:53:26.028692  434603 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-042675 NodeName:newest-cni-042675 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:53:26.028875  434603 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-042675"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
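
The multi-document config above is what minikube stages for kubeadm (written a few lines below as /var/tmp/minikube/kubeadm.yaml.new). Recent kubeadm releases can check such a file offline before it is applied; a minimal sketch, assuming the binary path shown in this log and that `kubeadm config validate` is available in v1.34:

	# Validate the rendered kubeadm config before init uses it (file name taken from the log below)
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new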
	
	I1025 09:53:26.028953  434603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:53:26.039587  434603 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:53:26.039660  434603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:53:26.050054  434603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 09:53:26.066563  434603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:53:26.082692  434603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 09:53:26.097268  434603 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:53:26.101882  434603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:53:26.112484  434603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:53:26.197224  434603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:53:26.226418  434603 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675 for IP: 192.168.103.2
	I1025 09:53:26.226440  434603 certs.go:195] generating shared ca certs ...
	I1025 09:53:26.226461  434603 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:53:26.226670  434603 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:53:26.226755  434603 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:53:26.226772  434603 certs.go:257] generating profile certs ...
	I1025 09:53:26.226903  434603 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/client.key
	I1025 09:53:26.226986  434603 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.key.c1b0a430
	I1025 09:53:26.227039  434603 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.key
	I1025 09:53:26.227179  434603 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:53:26.227218  434603 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:53:26.227231  434603 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:53:26.227279  434603 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:53:26.227312  434603 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:53:26.227340  434603 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:53:26.227413  434603 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:53:26.228293  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:53:26.249101  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:53:26.270753  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:53:26.292194  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:53:26.317791  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:53:26.340738  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:53:26.359990  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:53:26.379888  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/newest-cni-042675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:53:26.398784  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:53:26.419773  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:53:26.441534  434603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:53:26.460244  434603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:53:26.475976  434603 ssh_runner.go:195] Run: openssl version
	I1025 09:53:26.483204  434603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:53:26.492371  434603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:53:26.496434  434603 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:53:26.496492  434603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:53:26.539861  434603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:53:26.548342  434603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:53:26.557814  434603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:53:26.562177  434603 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:53:26.562238  434603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:53:26.606014  434603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:53:26.615990  434603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:53:26.625112  434603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:53:26.629846  434603 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:53:26.629902  434603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:53:26.668926  434603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
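Note on the sequence above: each CA is installed by computing its OpenSSL subject hash and symlinking the PEM as <hash>.0 under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL's directory lookup finds trust anchors. A minimal Go sketch of that pattern, shelling out to openssl exactly as the log does; the path in main is illustrative, and this is not minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA mirrors the log's `openssl x509 -hash` + `ln -fs` sequence:
    // OpenSSL looks up trusted CAs by <subject-hash>.0 filenames.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // equivalent of ln -fs: replace a stale link if present
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }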
	I1025 09:53:26.677940  434603 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:53:26.682068  434603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:53:26.721055  434603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:53:26.770546  434603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:53:26.827155  434603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:53:26.881576  434603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:53:26.944972  434603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
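The six `-checkend 86400` runs above verify that no control-plane certificate expires within the next 24 hours before reusing it. A stdlib-only Go sketch of the same check (the path in main is illustrative):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // notExpiringWithin reports whether the cert at path stays valid for the
    // given window, matching `openssl x509 -checkend <seconds>`.
    func notExpiringWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := notExpiringWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }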
	I1025 09:53:26.989855  434603 kubeadm.go:400] StartCluster: {Name:newest-cni-042675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-042675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:53:26.989972  434603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:53:26.990108  434603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:53:27.022797  434603 cri.go:89] found id: "e09c7f242156e743288e824a75789e841f5e0338224eb02ca3463157dde8fd76"
	I1025 09:53:27.022822  434603 cri.go:89] found id: "c6a20cd0bc60d27b3580719acf9e5a11bd5e671c8382a15ba38ec0beddb7e9f6"
	I1025 09:53:27.022827  434603 cri.go:89] found id: "3ea3c7de539896c9176c40583cd88b28e00fc00fdebf05a360d418da896c2b11"
	I1025 09:53:27.022831  434603 cri.go:89] found id: "deea16b116d1e92886d0803275bb09d578376d1950b22febd0bdacb1321204a0"
	I1025 09:53:27.022836  434603 cri.go:89] found id: ""
	I1025 09:53:27.022887  434603 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:53:27.039831  434603 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:27Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:53:27.039926  434603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:53:27.052160  434603 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:53:27.052179  434603 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:53:27.052221  434603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:53:27.060412  434603 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:53:27.061099  434603 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-042675" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:53:27.061595  434603 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-130604/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-042675" cluster setting kubeconfig missing "newest-cni-042675" context setting]
	I1025 09:53:27.062267  434603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:53:27.063722  434603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:53:27.072959  434603 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 09:53:27.073000  434603 kubeadm.go:601] duration metric: took 20.814722ms to restartPrimaryControlPlane
	I1025 09:53:27.073011  434603 kubeadm.go:402] duration metric: took 83.192109ms to StartCluster
	I1025 09:53:27.073029  434603 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:53:27.073087  434603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:53:27.075413  434603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
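Both kubeconfig writes above go through a file lock acquired with a 500ms retry delay and a 1m timeout (the lock.go:35 lines). A generic stdlib sketch of that acquire loop; this is the shape of the behavior, not minikube's actual lock package:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquire creates lockPath with O_EXCL, retrying every delay until the
    // timeout elapses. The returned func releases the lock.
    func acquire(lockPath string, delay, timeout time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(lockPath) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s", lockPath)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer release()
    	// ... write the kubeconfig under the lock here ...
    }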
	I1025 09:53:27.075691  434603 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:53:27.076035  434603 config.go:182] Loaded profile config "newest-cni-042675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:27.075986  434603 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:53:27.076119  434603 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-042675"
	I1025 09:53:27.076140  434603 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-042675"
	W1025 09:53:27.076153  434603 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:53:27.076152  434603 addons.go:69] Setting dashboard=true in profile "newest-cni-042675"
	I1025 09:53:27.076179  434603 addons.go:238] Setting addon dashboard=true in "newest-cni-042675"
	I1025 09:53:27.076188  434603 addons.go:69] Setting default-storageclass=true in profile "newest-cni-042675"
	I1025 09:53:27.076180  434603 host.go:66] Checking if "newest-cni-042675" exists ...
	I1025 09:53:27.076214  434603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-042675"
	W1025 09:53:27.076194  434603 addons.go:247] addon dashboard should already be in state true
	I1025 09:53:27.076249  434603 host.go:66] Checking if "newest-cni-042675" exists ...
	I1025 09:53:27.076573  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:27.076725  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:27.076751  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:27.082934  434603 out.go:179] * Verifying Kubernetes components...
	I1025 09:53:27.084364  434603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:53:27.107571  434603 addons.go:238] Setting addon default-storageclass=true in "newest-cni-042675"
	W1025 09:53:27.107596  434603 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:53:27.107624  434603 host.go:66] Checking if "newest-cni-042675" exists ...
	I1025 09:53:27.108738  434603 cli_runner.go:164] Run: docker container inspect newest-cni-042675 --format={{.State.Status}}
	I1025 09:53:27.110620  434603 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:53:27.111893  434603 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:53:27.112969  434603 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:53:27.113016  434603 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:53:27.113086  434603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:53:27.113155  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	W1025 09:53:24.050903  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	W1025 09:53:26.051511  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	W1025 09:53:28.550564  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	I1025 09:53:27.114576  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:53:27.114595  434603 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:53:27.114660  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:27.147600  434603 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:53:27.147689  434603 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:53:27.147769  434603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-042675
	I1025 09:53:27.156247  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:27.173295  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:27.186712  434603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33230 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/newest-cni-042675/id_rsa Username:docker}
	I1025 09:53:27.270077  434603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:53:27.282865  434603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:53:27.287485  434603 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:53:27.287550  434603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:53:27.304775  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:53:27.304806  434603 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:53:27.306968  434603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:53:27.325006  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:53:27.325032  434603 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:53:27.349072  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:53:27.349097  434603 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:53:27.374734  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:53:27.374761  434603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:53:27.398177  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:53:27.398207  434603 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:53:27.418820  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:53:27.418847  434603 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:53:27.438294  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:53:27.438320  434603 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:53:27.453991  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:53:27.454021  434603 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:53:27.470305  434603 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:53:27.470332  434603 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:53:27.485940  434603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:53:29.229231  434603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.946330513s)
	I1025 09:53:29.229289  434603 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.941713898s)
	I1025 09:53:29.229324  434603 api_server.go:72] duration metric: took 2.153604964s to wait for apiserver process to appear ...
	I1025 09:53:29.229338  434603 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:53:29.229381  434603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.922353989s)
	I1025 09:53:29.229394  434603 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:53:29.229526  434603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.743547969s)
	I1025 09:53:29.231049  434603 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-042675 addons enable metrics-server
	
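The addon staging above ("scp memory --> ..." and "scp dashboard/... --> ...") streams manifest bytes to the node over the SSH connections opened at sshutil.go:53, rather than invoking scp proper, and then applies them with the node's bundled kubectl. A sketch of that push pattern using golang.org/x/crypto/ssh; the key path, address, and file contents are illustrative, and host-key verification is skipped only because this targets a throwaway test VM:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // push writes data to remotePath by piping it into `sudo tee` over SSH,
    // which is the effect of the "scp memory --> ..." lines in the log.
    func push(client *ssh.Client, data []byte, remotePath string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    }

    func main() {
    	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/newest-cni-042675/id_rsa"))
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33230", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	_ = push(client, []byte("kind: Namespace\n"), "/etc/kubernetes/addons/example.yaml")
    }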
	I1025 09:53:29.238036  434603 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:53:29.238061  434603 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:53:29.247036  434603 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1025 09:53:29.248341  434603 addons.go:514] duration metric: took 2.172361914s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 09:53:29.729560  434603 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:53:29.735443  434603 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:53:29.735476  434603 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:53:30.229911  434603 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:53:30.234336  434603 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:53:30.235401  434603 api_server.go:141] control plane version: v1.34.1
	I1025 09:53:30.235424  434603 api_server.go:131] duration metric: took 1.006081483s to wait for apiserver health ...
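The healthz wait above polls the endpoint on a ~500ms cadence, treating 500 responses (rbac/bootstrap-roles and the priority-class hook still initializing) as retryable until the apiserver returns 200 "ok". A minimal Go sketch of that loop; as an assumption for brevity it skips TLS verification, where the real client verifies against the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    	"time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Assumption: skip verification; minikube trusts the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.103.2:8443/healthz", time.Minute))
    }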
	I1025 09:53:30.235434  434603 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:53:30.238752  434603 system_pods.go:59] 8 kube-system pods found
	I1025 09:53:30.238784  434603 system_pods.go:61] "coredns-66bc5c9577-v4xpv" [c6b5ed04-03a3-4b67-bd8b-3d0392236861] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:53:30.238795  434603 system_pods.go:61] "etcd-newest-cni-042675" [559f055a-4502-4e2e-a28e-096449f29d72] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:53:30.238808  434603 system_pods.go:61] "kindnet-xsn67" [6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 09:53:30.238818  434603 system_pods.go:61] "kube-apiserver-newest-cni-042675" [0be15777-76f2-46e9-b9da-fe0f7a4426a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:53:30.238830  434603 system_pods.go:61] "kube-controller-manager-newest-cni-042675" [8ffd378a-c0d8-4135-a9be-b7532cb0f44c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:53:30.238842  434603 system_pods.go:61] "kube-proxy-468gg" [7360d3df-fd12-429c-b79f-f8a744d0de49] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:53:30.238857  434603 system_pods.go:61] "kube-scheduler-newest-cni-042675" [98395f6b-3670-40f9-a7ca-1e9d5c7c0c4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:53:30.238879  434603 system_pods.go:61] "storage-provisioner" [43ce25b5-99bd-4159-9b8c-efd6ca6d159c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:53:30.238894  434603 system_pods.go:74] duration metric: took 3.453104ms to wait for pod list to return data ...
	I1025 09:53:30.238907  434603 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:53:30.241219  434603 default_sa.go:45] found service account: "default"
	I1025 09:53:30.241239  434603 default_sa.go:55] duration metric: took 2.325802ms for default service account to be created ...
	I1025 09:53:30.241251  434603 kubeadm.go:586] duration metric: took 3.165532282s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:53:30.241270  434603 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:53:30.243465  434603 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:53:30.243487  434603 node_conditions.go:123] node cpu capacity is 8
	I1025 09:53:30.243499  434603 node_conditions.go:105] duration metric: took 2.219256ms to run NodePressure ...
	I1025 09:53:30.243510  434603 start.go:241] waiting for startup goroutines ...
	I1025 09:53:30.243519  434603 start.go:246] waiting for cluster config update ...
	I1025 09:53:30.243531  434603 start.go:255] writing updated cluster config ...
	I1025 09:53:30.243768  434603 ssh_runner.go:195] Run: rm -f paused
	I1025 09:53:30.292236  434603 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:53:30.294042  434603 out.go:179] * Done! kubectl is now configured to use "newest-cni-042675" cluster and "default" namespace by default
	W1025 09:53:27.039520  417881 node_ready.go:57] node "no-preload-656799" has "Ready":"False" status (will retry)
	I1025 09:53:29.032813  417881 node_ready.go:49] node "no-preload-656799" is "Ready"
	I1025 09:53:29.032842  417881 node_ready.go:38] duration metric: took 13.504064788s for node "no-preload-656799" to be "Ready" ...
	I1025 09:53:29.032926  417881 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:53:29.032995  417881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:53:29.056365  417881 api_server.go:72] duration metric: took 13.818057121s to wait for apiserver process to appear ...
	I1025 09:53:29.056408  417881 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:53:29.056428  417881 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 09:53:29.068688  417881 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 09:53:29.069709  417881 api_server.go:141] control plane version: v1.34.1
	I1025 09:53:29.069850  417881 api_server.go:131] duration metric: took 13.420698ms to wait for apiserver health ...
	I1025 09:53:29.069879  417881 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:53:29.075489  417881 system_pods.go:59] 8 kube-system pods found
	I1025 09:53:29.075589  417881 system_pods.go:61] "coredns-66bc5c9577-sw9hv" [b8784813-9a51-43f5-ae3a-d5f9a1cd7d41] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:53:29.075670  417881 system_pods.go:61] "etcd-no-preload-656799" [6568c784-57c2-42e4-9c3f-8f82801d1d97] Running
	I1025 09:53:29.075704  417881 system_pods.go:61] "kindnet-nbj7f" [c4a372bb-2500-4e98-9012-a3076916ffe8] Running
	I1025 09:53:29.075710  417881 system_pods.go:61] "kube-apiserver-no-preload-656799" [b0f0fbe5-c605-4ed7-b1d7-81a2205ef358] Running
	I1025 09:53:29.075717  417881 system_pods.go:61] "kube-controller-manager-no-preload-656799" [380b9304-6bbe-48ef-8e63-1148da5002b8] Running
	I1025 09:53:29.075722  417881 system_pods.go:61] "kube-proxy-gfph2" [150e67b8-c0b3-4e74-a94d-a43506de4a53] Running
	I1025 09:53:29.075727  417881 system_pods.go:61] "kube-scheduler-no-preload-656799" [65fa78d7-800a-4e58-be4e-9b609dfda168] Running
	I1025 09:53:29.075734  417881 system_pods.go:61] "storage-provisioner" [4e4f58ae-a176-4a16-a7ec-035c2170c2c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:53:29.075743  417881 system_pods.go:74] duration metric: took 5.833388ms to wait for pod list to return data ...
	I1025 09:53:29.075808  417881 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:53:29.079761  417881 default_sa.go:45] found service account: "default"
	I1025 09:53:29.079869  417881 default_sa.go:55] duration metric: took 4.040873ms for default service account to be created ...
	I1025 09:53:29.079892  417881 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:53:29.084757  417881 system_pods.go:86] 8 kube-system pods found
	I1025 09:53:29.084786  417881 system_pods.go:89] "coredns-66bc5c9577-sw9hv" [b8784813-9a51-43f5-ae3a-d5f9a1cd7d41] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:53:29.084811  417881 system_pods.go:89] "etcd-no-preload-656799" [6568c784-57c2-42e4-9c3f-8f82801d1d97] Running
	I1025 09:53:29.084819  417881 system_pods.go:89] "kindnet-nbj7f" [c4a372bb-2500-4e98-9012-a3076916ffe8] Running
	I1025 09:53:29.084847  417881 system_pods.go:89] "kube-apiserver-no-preload-656799" [b0f0fbe5-c605-4ed7-b1d7-81a2205ef358] Running
	I1025 09:53:29.084904  417881 system_pods.go:89] "kube-controller-manager-no-preload-656799" [380b9304-6bbe-48ef-8e63-1148da5002b8] Running
	I1025 09:53:29.084912  417881 system_pods.go:89] "kube-proxy-gfph2" [150e67b8-c0b3-4e74-a94d-a43506de4a53] Running
	I1025 09:53:29.084917  417881 system_pods.go:89] "kube-scheduler-no-preload-656799" [65fa78d7-800a-4e58-be4e-9b609dfda168] Running
	I1025 09:53:29.084924  417881 system_pods.go:89] "storage-provisioner" [4e4f58ae-a176-4a16-a7ec-035c2170c2c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:53:29.084984  417881 retry.go:31] will retry after 204.032384ms: missing components: kube-dns
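The retry.go line above re-lists kube-system pods after a short, jittered sleep while a required component (here kube-dns) is still Pending; the next listing at 09:53:29.294 then finds everything Running. A generic sketch of that retry shape, with the backoff parameters chosen for illustration:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil keeps calling check with growing, jittered sleeps until it
    // succeeds or the deadline passes, like the retry.go lines in the log.
    func retryUntil(timeout time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	wait := 200 * time.Millisecond
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: last error: %w", err)
    		}
    		jitter := time.Duration(rand.Int63n(int64(wait) / 4))
    		time.Sleep(wait + jitter)
    		if wait < 5*time.Second {
    			wait *= 2
    		}
    	}
    }

    func main() {
    	attempts := 0
    	err := retryUntil(10*time.Second, func() error {
    		attempts++
    		if attempts < 3 {
    			return fmt.Errorf("missing components: kube-dns")
    		}
    		return nil
    	})
    	fmt.Println(err)
    }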
	I1025 09:53:29.294011  417881 system_pods.go:86] 8 kube-system pods found
	I1025 09:53:29.294051  417881 system_pods.go:89] "coredns-66bc5c9577-sw9hv" [b8784813-9a51-43f5-ae3a-d5f9a1cd7d41] Running
	I1025 09:53:29.294060  417881 system_pods.go:89] "etcd-no-preload-656799" [6568c784-57c2-42e4-9c3f-8f82801d1d97] Running
	I1025 09:53:29.294065  417881 system_pods.go:89] "kindnet-nbj7f" [c4a372bb-2500-4e98-9012-a3076916ffe8] Running
	I1025 09:53:29.294070  417881 system_pods.go:89] "kube-apiserver-no-preload-656799" [b0f0fbe5-c605-4ed7-b1d7-81a2205ef358] Running
	I1025 09:53:29.294077  417881 system_pods.go:89] "kube-controller-manager-no-preload-656799" [380b9304-6bbe-48ef-8e63-1148da5002b8] Running
	I1025 09:53:29.294082  417881 system_pods.go:89] "kube-proxy-gfph2" [150e67b8-c0b3-4e74-a94d-a43506de4a53] Running
	I1025 09:53:29.294087  417881 system_pods.go:89] "kube-scheduler-no-preload-656799" [65fa78d7-800a-4e58-be4e-9b609dfda168] Running
	I1025 09:53:29.294099  417881 system_pods.go:89] "storage-provisioner" [4e4f58ae-a176-4a16-a7ec-035c2170c2c3] Running
	I1025 09:53:29.294110  417881 system_pods.go:126] duration metric: took 214.18501ms to wait for k8s-apps to be running ...
	I1025 09:53:29.294120  417881 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:53:29.294183  417881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:53:29.311269  417881 system_svc.go:56] duration metric: took 17.137558ms WaitForService to wait for kubelet
	I1025 09:53:29.311302  417881 kubeadm.go:586] duration metric: took 14.073010823s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:53:29.311325  417881 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:53:29.314883  417881 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:53:29.314914  417881 node_conditions.go:123] node cpu capacity is 8
	I1025 09:53:29.314934  417881 node_conditions.go:105] duration metric: took 3.602613ms to run NodePressure ...
	I1025 09:53:29.314948  417881 start.go:241] waiting for startup goroutines ...
	I1025 09:53:29.314958  417881 start.go:246] waiting for cluster config update ...
	I1025 09:53:29.314971  417881 start.go:255] writing updated cluster config ...
	I1025 09:53:29.315321  417881 ssh_runner.go:195] Run: rm -f paused
	I1025 09:53:29.320099  417881 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:53:29.327433  417881 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sw9hv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.332989  417881 pod_ready.go:94] pod "coredns-66bc5c9577-sw9hv" is "Ready"
	I1025 09:53:29.333026  417881 pod_ready.go:86] duration metric: took 5.566458ms for pod "coredns-66bc5c9577-sw9hv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.335819  417881 pod_ready.go:83] waiting for pod "etcd-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.340667  417881 pod_ready.go:94] pod "etcd-no-preload-656799" is "Ready"
	I1025 09:53:29.340690  417881 pod_ready.go:86] duration metric: took 4.845438ms for pod "etcd-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.342846  417881 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.347152  417881 pod_ready.go:94] pod "kube-apiserver-no-preload-656799" is "Ready"
	I1025 09:53:29.347178  417881 pod_ready.go:86] duration metric: took 4.306484ms for pod "kube-apiserver-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.349218  417881 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.724531  417881 pod_ready.go:94] pod "kube-controller-manager-no-preload-656799" is "Ready"
	I1025 09:53:29.724561  417881 pod_ready.go:86] duration metric: took 375.320261ms for pod "kube-controller-manager-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:29.924646  417881 pod_ready.go:83] waiting for pod "kube-proxy-gfph2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:30.325095  417881 pod_ready.go:94] pod "kube-proxy-gfph2" is "Ready"
	I1025 09:53:30.325129  417881 pod_ready.go:86] duration metric: took 400.453623ms for pod "kube-proxy-gfph2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:30.525016  417881 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:30.925557  417881 pod_ready.go:94] pod "kube-scheduler-no-preload-656799" is "Ready"
	I1025 09:53:30.925618  417881 pod_ready.go:86] duration metric: took 400.564111ms for pod "kube-scheduler-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:53:30.925633  417881 pod_ready.go:40] duration metric: took 1.605502105s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
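The pod_ready waits above poll each labelled kube-system pod until its PodReady condition reports True. A client-go sketch of that per-pod check, assuming a kubeconfig at the default home location and the kube-dns label selector from the log:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's PodReady condition is True.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
    		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s ready=%v\n", p.Name, isReady(&p))
    	}
    }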
	I1025 09:53:30.985641  417881 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:53:30.987672  417881 out.go:179] * Done! kubectl is now configured to use "no-preload-656799" cluster and "default" namespace by default
	W1025 09:53:31.050267  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	W1025 09:53:33.050650  423245 node_ready.go:57] node "default-k8s-diff-port-880773" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.612639878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.615821193Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=009200a2-7652-4e0d-b3dd-58b2b1d6281f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.616286298Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=497e06b6-abd0-46cc-842f-9fb1c8d79740 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.617649051Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.618309461Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.618628954Z" level=info msg="Ran pod sandbox 97661a1323da4ac1c7cbe7689d685850de12081c98d13bfefa6171c5d4d76d05 with infra container: kube-system/kindnet-xsn67/POD" id=009200a2-7652-4e0d-b3dd-58b2b1d6281f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.619263923Z" level=info msg="Ran pod sandbox 2bf1da1991381329fe43f5cc451cb2a76f334f97e3f008186b4c42564fb97893 with infra container: kube-system/kube-proxy-468gg/POD" id=497e06b6-abd0-46cc-842f-9fb1c8d79740 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.619950808Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b411ac17-30a5-4248-9782-eade50895709 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.620616352Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=df0b361d-db80-421b-89b5-066596f22737 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.621360548Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9bb14420-eb1f-4551-a93f-e572464bd498 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.62231087Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=02e98db3-f288-4def-bb96-1aab1228f624 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.622807275Z" level=info msg="Creating container: kube-system/kindnet-xsn67/kindnet-cni" id=cbe4dfbc-c16e-4792-b8ad-02e716eea93d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.62291698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.623790572Z" level=info msg="Creating container: kube-system/kube-proxy-468gg/kube-proxy" id=d1ee7522-4ae0-45b8-ae28-080487d3259a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.623896193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.627968376Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.628464635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.63051951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.63105895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.658726062Z" level=info msg="Created container e5376683894476899240a201201c6255b55aba53dc2c98876839e76e1aae5856: kube-system/kindnet-xsn67/kindnet-cni" id=cbe4dfbc-c16e-4792-b8ad-02e716eea93d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.659521011Z" level=info msg="Starting container: e5376683894476899240a201201c6255b55aba53dc2c98876839e76e1aae5856" id=669f6e5d-5b6c-4c2f-858f-5bd03752f0e9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.6614688Z" level=info msg="Created container ab8d5dfecfb639f51b1199df21e177f9c0ef17f03b815319f962da908cf3f139: kube-system/kube-proxy-468gg/kube-proxy" id=d1ee7522-4ae0-45b8-ae28-080487d3259a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.661812267Z" level=info msg="Started container" PID=1047 containerID=e5376683894476899240a201201c6255b55aba53dc2c98876839e76e1aae5856 description=kube-system/kindnet-xsn67/kindnet-cni id=669f6e5d-5b6c-4c2f-858f-5bd03752f0e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=97661a1323da4ac1c7cbe7689d685850de12081c98d13bfefa6171c5d4d76d05
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.662136706Z" level=info msg="Starting container: ab8d5dfecfb639f51b1199df21e177f9c0ef17f03b815319f962da908cf3f139" id=616a5102-01e6-44cb-a568-0e7f18be768e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:53:29 newest-cni-042675 crio[521]: time="2025-10-25T09:53:29.6654511Z" level=info msg="Started container" PID=1048 containerID=ab8d5dfecfb639f51b1199df21e177f9c0ef17f03b815319f962da908cf3f139 description=kube-system/kube-proxy-468gg/kube-proxy id=616a5102-01e6-44cb-a568-0e7f18be768e name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bf1da1991381329fe43f5cc451cb2a76f334f97e3f008186b4c42564fb97893
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ab8d5dfecfb63       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   2bf1da1991381       kube-proxy-468gg                            kube-system
	e537668389447       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   97661a1323da4       kindnet-xsn67                               kube-system
	e09c7f242156e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   e51b90b3a83ce       kube-apiserver-newest-cni-042675            kube-system
	c6a20cd0bc60d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   7234458571dc9       etcd-newest-cni-042675                      kube-system
	3ea3c7de53989       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   16100fd201a00       kube-scheduler-newest-cni-042675            kube-system
	deea16b116d1e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   6d9a8a6b2028f       kube-controller-manager-newest-cni-042675   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-042675
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-042675
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=newest-cni-042675
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_53_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:53:00 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-042675
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:53:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:53:28 +0000   Sat, 25 Oct 2025 09:52:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:53:28 +0000   Sat, 25 Oct 2025 09:52:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:53:28 +0000   Sat, 25 Oct 2025 09:52:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 09:53:28 +0000   Sat, 25 Oct 2025 09:52:57 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-042675
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                967fd215-cebb-4af9-b5cd-64a07c73ec38
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-042675                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-xsn67                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-042675             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-042675    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-468gg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-042675             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 6s    kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node newest-cni-042675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node newest-cni-042675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node newest-cni-042675 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node newest-cni-042675 event: Registered Node newest-cni-042675 in Controller
	  Normal  RegisteredNode           5s    node-controller  Node newest-cni-042675 event: Registered Node newest-cni-042675 in Controller
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	
	
	==> etcd [c6a20cd0bc60d27b3580719acf9e5a11bd5e671c8382a15ba38ec0beddb7e9f6] <==
	{"level":"warn","ts":"2025-10-25T09:53:27.864076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.872586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.882799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.904529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.920072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.931713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.940786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.948762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.956887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.967771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.976046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.985240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:27.993503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.000376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.008609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.016250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.023798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.030777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.038651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.045016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.052782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.073385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.081232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.087852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:28.142018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40410","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:53:36 up  1:35,  0 user,  load average: 5.37, 4.31, 2.60
	Linux newest-cni-042675 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e5376683894476899240a201201c6255b55aba53dc2c98876839e76e1aae5856] <==
	I1025 09:53:29.836342       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:53:29.929138       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 09:53:29.929318       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:53:29.929340       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:53:29.929408       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:53:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:53:30.038398       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:53:30.038433       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:53:30.038447       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:53:30.129153       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:53:30.464450       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:53:30.464545       1 metrics.go:72] Registering metrics
	I1025 09:53:30.464653       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [e09c7f242156e743288e824a75789e841f5e0338224eb02ca3463157dde8fd76] <==
	I1025 09:53:28.681848       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:53:28.682932       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:53:28.683007       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:53:28.684701       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:53:28.685434       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:53:28.685650       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:53:28.685669       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:53:28.685677       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:53:28.685682       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:53:28.688280       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:53:28.688316       1 policy_source.go:240] refreshing policies
	I1025 09:53:28.713824       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:53:28.732472       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:53:29.008411       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:53:29.065611       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:53:29.093476       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:53:29.101662       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:53:29.115258       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:53:29.163929       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.208.222"}
	I1025 09:53:29.178716       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.240.117"}
	I1025 09:53:29.571041       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:53:32.030522       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:53:32.030569       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:53:32.182215       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:53:32.330974       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [deea16b116d1e92886d0803275bb09d578376d1950b22febd0bdacb1321204a0] <==
	I1025 09:53:31.827770       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:53:31.827784       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:53:31.827788       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:53:31.827808       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:53:31.827793       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:53:31.827843       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:53:31.827960       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-042675"
	I1025 09:53:31.828011       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:53:31.828036       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:53:31.828121       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:53:31.828440       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:53:31.828457       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:53:31.829804       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:53:31.829923       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:53:31.832611       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:53:31.832678       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:53:31.832733       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:53:31.832738       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:53:31.832742       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:53:31.833777       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:53:31.845235       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:53:31.848384       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:53:31.849604       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:53:31.853754       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:53:31.859055       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ab8d5dfecfb639f51b1199df21e177f9c0ef17f03b815319f962da908cf3f139] <==
	I1025 09:53:29.705682       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:53:29.778228       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:53:29.878950       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:53:29.878989       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 09:53:29.879085       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:53:29.897180       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:53:29.897240       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:53:29.902443       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:53:29.902920       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:53:29.902952       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:53:29.904536       1 config.go:200] "Starting service config controller"
	I1025 09:53:29.904568       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:53:29.904603       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:53:29.904611       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:53:29.904602       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:53:29.904633       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:53:29.904644       1 config.go:309] "Starting node config controller"
	I1025 09:53:29.904663       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:53:29.904672       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:53:30.004743       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:53:30.004775       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:53:30.004758       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3ea3c7de539896c9176c40583cd88b28e00fc00fdebf05a360d418da896c2b11] <==
	I1025 09:53:28.112430       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:53:28.660503       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:53:28.660578       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1025 09:53:28.660596       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:53:28.660624       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:53:28.696887       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:53:28.696914       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:53:28.700733       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:53:28.700828       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:53:28.701163       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:53:28.702261       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:53:28.801569       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: E1025 09:53:28.727071     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-042675\" already exists" pod="kube-system/kube-scheduler-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.727105     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: E1025 09:53:28.735672     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-042675\" already exists" pod="kube-system/etcd-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.735717     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: E1025 09:53:28.742431     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-042675\" already exists" pod="kube-system/kube-apiserver-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.742511     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: E1025 09:53:28.749325     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-042675\" already exists" pod="kube-system/kube-controller-manager-newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.759771     672 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.759873     672 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-042675"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.759909     672 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 09:53:28 newest-cni-042675 kubelet[672]: I1025 09:53:28.760836     672 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.303117     672 apiserver.go:52] "Watching apiserver"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.306725     672 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.356255     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-042675"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.356402     672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-042675"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: E1025 09:53:29.363255     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-042675\" already exists" pod="kube-system/etcd-newest-cni-042675"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: E1025 09:53:29.363261     672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-042675\" already exists" pod="kube-system/kube-apiserver-newest-cni-042675"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.386577     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7360d3df-fd12-429c-b79f-f8a744d0de49-xtables-lock\") pod \"kube-proxy-468gg\" (UID: \"7360d3df-fd12-429c-b79f-f8a744d0de49\") " pod="kube-system/kube-proxy-468gg"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.386614     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3-lib-modules\") pod \"kindnet-xsn67\" (UID: \"6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3\") " pod="kube-system/kindnet-xsn67"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.386648     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7360d3df-fd12-429c-b79f-f8a744d0de49-lib-modules\") pod \"kube-proxy-468gg\" (UID: \"7360d3df-fd12-429c-b79f-f8a744d0de49\") " pod="kube-system/kube-proxy-468gg"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.386884     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3-cni-cfg\") pod \"kindnet-xsn67\" (UID: \"6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3\") " pod="kube-system/kindnet-xsn67"
	Oct 25 09:53:29 newest-cni-042675 kubelet[672]: I1025 09:53:29.386916     672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3-xtables-lock\") pod \"kindnet-xsn67\" (UID: \"6f35cbac-8a8e-440e-a467-4d9f0a6ac0b3\") " pod="kube-system/kindnet-xsn67"
	Oct 25 09:53:31 newest-cni-042675 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:53:31 newest-cni-042675 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:53:31 newest-cni-042675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-042675 -n newest-cni-042675
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-042675 -n newest-cni-042675: exit status 2 (327.006105ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-042675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-v4xpv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xgmks kubernetes-dashboard-855c9754f9-q8khq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-042675 describe pod coredns-66bc5c9577-v4xpv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xgmks kubernetes-dashboard-855c9754f9-q8khq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-042675 describe pod coredns-66bc5c9577-v4xpv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xgmks kubernetes-dashboard-855c9754f9-q8khq: exit status 1 (63.725173ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-v4xpv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-xgmks" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-q8khq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-042675 describe pod coredns-66bc5c9577-v4xpv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xgmks kubernetes-dashboard-855c9754f9-q8khq: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-656799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-656799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (264.351269ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-656799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-656799 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-656799 describe deploy/metrics-server -n kube-system: exit status 1 (64.19771ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-656799 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
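For reference, the MK_ADDON_ENABLE_PAUSED failure above comes from the paused-state probe: per the stderr, it shells out to `sudo runc list -f json` on the node, which exits 1 because /run/runc does not exist under the crio runtime. The following is a minimal Go sketch of such a probe under those assumptions; it is an illustration, not minikube's actual source.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container mirrors the two fields of `runc list -f json` output we care about.
type container struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused runs `sudo runc list -f json` and returns the IDs of paused
// containers. In this run the command itself fails with
// "open /run/runc: no such file or directory", so the error branch is taken.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}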
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-656799
helpers_test.go:243: (dbg) docker inspect no-preload-656799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3",
	        "Created": "2025-10-25T09:52:29.632041057Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 420051,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:52:30.207227081Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3/hosts",
	        "LogPath": "/var/lib/docker/containers/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3-json.log",
	        "Name": "/no-preload-656799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-656799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-656799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3",
	                "LowerDir": "/var/lib/docker/overlay2/02618d7f775b19d8209d62a9f9c27036442b89e111a2465ca1e3390ba980e37b-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02618d7f775b19d8209d62a9f9c27036442b89e111a2465ca1e3390ba980e37b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02618d7f775b19d8209d62a9f9c27036442b89e111a2465ca1e3390ba980e37b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02618d7f775b19d8209d62a9f9c27036442b89e111a2465ca1e3390ba980e37b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-656799",
	                "Source": "/var/lib/docker/volumes/no-preload-656799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-656799",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-656799",
	                "name.minikube.sigs.k8s.io": "no-preload-656799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cdaac9cb1a31430364f21ed3c5ee0b92aaa5247eb9d4b6446661364988e56162",
	            "SandboxKey": "/var/run/docker/netns/cdaac9cb1a31",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33210"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33211"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33214"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33212"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33213"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-656799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:5a:b3:4d:e0:68",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c5f8d7127b2abc1fa122a07d1a58513d1f998c751b6e0894b37ec014b426c376",
	                    "EndpointID": "ebe85c684f16d41454ff53a53e88bff06a09bd786a199f3943e85e757f665d84",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-656799",
	                        "8ccea090eb6c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
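For reference, the published ports in the docker inspect output above are how the cluster is reached from the host: the kicbase container maps 8443/tcp (the apiserver) to a loopback port, 127.0.0.1:33213 in this run. A minimal Go sketch that reads that mapping back out of the `docker inspect` JSON (an illustration, not part of the test harness):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-656799").Output()
	if err != nil {
		log.Fatal(err)
	}
	// docker inspect prints a JSON array with one object per container.
	var info []struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatal(err)
	}
	if len(info) == 0 {
		log.Fatal("container not found")
	}
	for _, b := range info[0].NetworkSettings.Ports["8443/tcp"] {
		// Per the inspect output above, this prints: apiserver published at 127.0.0.1:33213
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
	}
}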
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-656799 -n no-preload-656799
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-656799 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-656799 logs -n 25: (1.076290641s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-129588                                                                                                                                                                                                                  │ kubernetes-upgrade-129588    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat containerd --no-pager                                                                                                                                                                         │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                  │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo cat /etc/containerd/config.toml                                                                                                                                                                             │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo containerd config dump                                                                                                                                                                                      │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl status crio --all --full --no-pager                                                                                                                                                               │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat crio --no-pager                                                                                                                                                                               │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                     │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo crio config                                                                                                                                                                                                 │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │                     │
	│ delete  │ -p enable-default-cni-035825                                                                                                                                                                                                                  │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-042675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p newest-cni-042675 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-042675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-676314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p old-k8s-version-676314 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ image   │ newest-cni-042675 image list --format=json                                                                                                                                                                                                    │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ pause   │ -p newest-cni-042675 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-656799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-001549                                                                                                                                                                                                               │ disable-driver-mounts-001549 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:53:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:53:39.763085  440020 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:53:39.763432  440020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:39.763448  440020 out.go:374] Setting ErrFile to fd 2...
	I1025 09:53:39.763454  440020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:39.763702  440020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:53:39.764162  440020 out.go:368] Setting JSON to false
	I1025 09:53:39.765338  440020 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5764,"bootTime":1761380256,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:53:39.765454  440020 start.go:141] virtualization: kvm guest
	I1025 09:53:39.767294  440020 out.go:179] * [embed-certs-846915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:53:39.768515  440020 notify.go:220] Checking for updates...
	I1025 09:53:39.768528  440020 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:53:39.769615  440020 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:53:39.770895  440020 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:53:39.771985  440020 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:53:39.773306  440020 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:53:39.774524  440020 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:53:39.776068  440020 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:39.776181  440020 config.go:182] Loaded profile config "no-preload-656799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:39.776281  440020 config.go:182] Loaded profile config "old-k8s-version-676314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 09:53:39.776411  440020 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:53:39.799902  440020 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:53:39.800025  440020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:53:39.864511  440020 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 09:53:39.85199233 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:53:39.864685  440020 docker.go:318] overlay module found
	I1025 09:53:39.866779  440020 out.go:179] * Using the docker driver based on user configuration
	I1025 09:53:39.867973  440020 start.go:305] selected driver: docker
	I1025 09:53:39.867996  440020 start.go:925] validating driver "docker" against <nil>
	I1025 09:53:39.868014  440020 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:53:39.868931  440020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:53:39.930166  440020 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-25 09:53:39.919193104 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:53:39.930422  440020 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:53:39.930724  440020 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:53:39.932469  440020 out.go:179] * Using Docker driver with root privileges
	I1025 09:53:39.933626  440020 cni.go:84] Creating CNI manager for ""
	I1025 09:53:39.933684  440020 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:53:39.933694  440020 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:53:39.933776  440020 start.go:349] cluster config:
	{Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:53:39.935051  440020 out.go:179] * Starting "embed-certs-846915" primary control-plane node in "embed-certs-846915" cluster
	I1025 09:53:39.936145  440020 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:53:39.937383  440020 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:53:39.938498  440020 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:53:39.938528  440020 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:53:39.938540  440020 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:53:39.938564  440020 cache.go:58] Caching tarball of preloaded images
	I1025 09:53:39.938669  440020 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:53:39.938684  440020 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:53:39.938787  440020 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/config.json ...
	I1025 09:53:39.938806  440020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/config.json: {Name:mk9609be3babe386e83d191f1d79f75a8ab1f7a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:53:39.960689  440020 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:53:39.960711  440020 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:53:39.960727  440020 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:53:39.960753  440020 start.go:360] acquireMachinesLock for embed-certs-846915: {Name:mk6afaad62774c341d106d1a8d37743a274e5cb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:39.960842  440020 start.go:364] duration metric: took 74.641µs to acquireMachinesLock for "embed-certs-846915"
	I1025 09:53:39.960868  440020 start.go:93] Provisioning new machine with config: &{Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:53:39.960951  440020 start.go:125] createHost starting for "" (driver="docker")
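
The start trace above probes the Docker driver twice with `docker system info --format "{{json .}}"` and parses the JSON to decide driver health (cli_runner.go / info.go). A minimal sketch of that probe, assuming only the docker CLI on PATH; the struct keeps just a few of the fields visible in the info dump above:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo decodes a small subset of `docker system info --format "{{json .}}"`,
// matching fields that appear in the info dump above.
type dockerInfo struct {
	ServerVersion string `json:"ServerVersion"`
	CgroupDriver  string `json:"CgroupDriver"`
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker driver unhealthy:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("unexpected payload:", err)
		return
	}
	fmt.Printf("docker %s, cgroup driver %s, %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal)
}
```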
	
	
	==> CRI-O <==
	Oct 25 09:53:29 no-preload-656799 crio[769]: time="2025-10-25T09:53:29.00374574Z" level=info msg="Started container" PID=2847 containerID=d69ea4a09075701d8a6c6235620e79f02aa6ffea5a312f1dfd78a8fb9b63b647 description=kube-system/storage-provisioner/storage-provisioner id=c3bc01d9-a7ad-47e6-bd43-822c25712977 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a479907500eb400c20aa65aa1f8a3e8b5abbd6a705ae922b74362bb05b18ca17
	Oct 25 09:53:29 no-preload-656799 crio[769]: time="2025-10-25T09:53:29.009321444Z" level=info msg="Started container" PID=2848 containerID=df3df43b472cbd70735904baf4ad4da5572f226ef6424ed266a019a52626ffa5 description=kube-system/coredns-66bc5c9577-sw9hv/coredns id=d56e2122-3711-4974-abc8-4faf64f1ed99 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f10f139eaa42a71bc6c149e9b9e42fcc1f78c636085cf723d88ed7277e130d4b
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.453941352Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d8d2e5a0-504b-403e-8c85-04d3fb13b273 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.454068429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.459011148Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:077c663c9833493f25a6a2034975cb03eba8738eb5bbec0095e01b49ee2a3cbc UID:e58484e4-93ad-4c1e-af87-8034efb88486 NetNS:/var/run/netns/dace1bcd-8645-4c9a-b134-0742f64f84bd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000552988}] Aliases:map[]}"
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.4590389Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.469331732Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:077c663c9833493f25a6a2034975cb03eba8738eb5bbec0095e01b49ee2a3cbc UID:e58484e4-93ad-4c1e-af87-8034efb88486 NetNS:/var/run/netns/dace1bcd-8645-4c9a-b134-0742f64f84bd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000552988}] Aliases:map[]}"
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.469524995Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.470280315Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.471125782Z" level=info msg="Ran pod sandbox 077c663c9833493f25a6a2034975cb03eba8738eb5bbec0095e01b49ee2a3cbc with infra container: default/busybox/POD" id=d8d2e5a0-504b-403e-8c85-04d3fb13b273 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.472247182Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=adfd49dc-115a-4a55-a065-18879615982a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.472370056Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=adfd49dc-115a-4a55-a065-18879615982a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.472416678Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=adfd49dc-115a-4a55-a065-18879615982a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.472987491Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1e247f13-6f32-4e62-a748-93cbea89ae9e name=/runtime.v1.ImageService/PullImage
	Oct 25 09:53:31 no-preload-656799 crio[769]: time="2025-10-25T09:53:31.474411085Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:53:33 no-preload-656799 crio[769]: time="2025-10-25T09:53:33.434613552Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1e247f13-6f32-4e62-a748-93cbea89ae9e name=/runtime.v1.ImageService/PullImage
	Oct 25 09:53:33 no-preload-656799 crio[769]: time="2025-10-25T09:53:33.435212985Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=702360b9-326d-45ee-b849-dbb77a424b91 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:33 no-preload-656799 crio[769]: time="2025-10-25T09:53:33.436542444Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=665b29ab-b5fe-4d71-9522-4255627e49ed name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:33 no-preload-656799 crio[769]: time="2025-10-25T09:53:33.441502801Z" level=info msg="Creating container: default/busybox/busybox" id=09cdcb62-93d2-4018-a522-fdd64a731e0b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:33 no-preload-656799 crio[769]: time="2025-10-25T09:53:33.441615376Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:33 no-preload-656799 crio[769]: time="2025-10-25T09:53:33.445442787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:33 no-preload-656799 crio[769]: time="2025-10-25T09:53:33.445859762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:33 no-preload-656799 crio[769]: time="2025-10-25T09:53:33.472174573Z" level=info msg="Created container 5f3a986d565ce718d44974b1aeef5bfe8903784c2b817b67566ff082e7da1363: default/busybox/busybox" id=09cdcb62-93d2-4018-a522-fdd64a731e0b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:33 no-preload-656799 crio[769]: time="2025-10-25T09:53:33.47285146Z" level=info msg="Starting container: 5f3a986d565ce718d44974b1aeef5bfe8903784c2b817b67566ff082e7da1363" id=a8e5c6e2-6ba5-41ce-870a-be1ddf202602 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:53:33 no-preload-656799 crio[769]: time="2025-10-25T09:53:33.474600747Z" level=info msg="Started container" PID=2926 containerID=5f3a986d565ce718d44974b1aeef5bfe8903784c2b817b67566ff082e7da1363 description=default/busybox/busybox id=a8e5c6e2-6ba5-41ce-870a-be1ddf202602 name=/runtime.v1.RuntimeService/StartContainer sandboxID=077c663c9833493f25a6a2034975cb03eba8738eb5bbec0095e01b49ee2a3cbc
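
The CRI-O entries above are the server side of CRI calls: an ImageStatus check that misses, a PullImage, then CreateContainer/StartContainer. A minimal sketch of the client side of the image exchange, assuming CRI-O's default socket path and the k8s.io/cri-api v1 bindings:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket (default path; an assumption for this sketch).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
	images := runtimeapi.NewImageServiceClient(conn)

	// Corresponds to the "Checking image status" lines above.
	status, err := images.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: img})
	if err != nil {
		panic(err)
	}
	if status.Image == nil {
		// Corresponds to "Image ... not found" followed by "Pulling image".
		if _, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{Image: img}); err != nil {
			panic(err)
		}
	}
	fmt.Println("image present")
}
```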
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5f3a986d565ce       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   077c663c98334       busybox                                     default
	df3df43b472cb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   f10f139eaa42a       coredns-66bc5c9577-sw9hv                    kube-system
	d69ea4a090757       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   a479907500eb4       storage-provisioner                         kube-system
	d8dd6696a4bb4       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   af0d9ee2733a6       kindnet-nbj7f                               kube-system
	f24211d30df72       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   0591b94dde801       kube-proxy-gfph2                            kube-system
	74c34a03974a3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   8c3687034b0a5       kube-scheduler-no-preload-656799            kube-system
	529e878c8548e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   8044c17da7269       kube-apiserver-no-preload-656799            kube-system
	b6319461edbaf       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   97d83f907a3a8       kube-controller-manager-no-preload-656799   kube-system
	9ba6b2a3d716f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   f67a391303436       etcd-no-preload-656799                      kube-system
	
	
	==> coredns [df3df43b472cbd70735904baf4ad4da5572f226ef6424ed266a019a52626ffa5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33489 - 27997 "HINFO IN 6224583677150673309.4574981129872839737. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020271478s
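
The single query logged above is CoreDNS's startup self-check (an HINFO lookup against its own listener). From a pod, the same resolver path can be exercised directly; a minimal sketch, assuming the kube-dns ClusterIP 10.96.0.10 that the apiserver log below shows being allocated:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Route lookups through the cluster DNS service instead of /etc/resolv.conf.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(addrs) // expect the apiserver ClusterIP, 10.96.0.1
}
```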
	
	
	==> describe nodes <==
	Name:               no-preload-656799
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-656799
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=no-preload-656799
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_53_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:53:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-656799
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:53:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:53:40 +0000   Sat, 25 Oct 2025 09:53:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:53:40 +0000   Sat, 25 Oct 2025 09:53:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:53:40 +0000   Sat, 25 Oct 2025 09:53:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:53:40 +0000   Sat, 25 Oct 2025 09:53:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-656799
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                5bcc7607-4d30-49cf-9ec1-c2712dc2e9c1
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-sw9hv                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-656799                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-nbj7f                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-656799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-no-preload-656799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-gfph2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-656799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node no-preload-656799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node no-preload-656799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node no-preload-656799 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node no-preload-656799 event: Registered Node no-preload-656799 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-656799 status is now: NodeReady
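
Everything in the `describe nodes` block is read off the Node API object. A minimal client-go sketch that pulls the same Conditions and Allocatable data (in-cluster config and the node name from this run are assumptions):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(
		context.Background(), "no-preload-656799", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Same rows as the Conditions table: MemoryPressure, DiskPressure, PIDPressure, Ready.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
	fmt.Println("allocatable memory:", node.Status.Allocatable.Memory().String())
}
```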
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
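
The "martian source" lines mean the kernel saw a packet whose source address should not appear on that interface (routine churn on freshly wired pod bridges); whether they are logged at all is controlled by the net.ipv4.conf.*.log_martians sysctl. A minimal sketch of flipping that knob, assuming root and a standard procfs mount:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Equivalent to: sysctl -w net.ipv4.conf.all.log_martians=1
	path := "/proc/sys/net/ipv4/conf/all/log_martians"
	if err := os.WriteFile(path, []byte("1"), 0o644); err != nil {
		fmt.Println("need root to toggle martian logging:", err)
		return
	}
	fmt.Println("martian logging enabled; watch dmesg for 'martian source' lines")
}
```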
	
	
	==> etcd [9ba6b2a3d716fda66ae6b5998916deefd9bdfe7262c0d4918a99987e36a155b9] <==
	{"level":"warn","ts":"2025-10-25T09:53:06.898330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:06.905460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:06.913828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:06.920942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:06.927042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:06.933933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:06.949933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:06.956555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:06.963773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:06.971442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:06.992651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:06.999658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.006701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.013044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.019105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.025612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.032734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.039663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.046204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.054562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.060923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.080342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.088431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.095817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:07.143923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55742","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:53:40 up  1:36,  0 user,  load average: 4.94, 4.24, 2.59
	Linux no-preload-656799 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d8dd6696a4bb4dfedf14e8c360bd74843c0e933d1809ad4c5f6f4bd8a62950f8] <==
	I1025 09:53:17.970484       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:53:17.970815       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:53:17.970978       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:53:17.970992       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:53:17.971012       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:53:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:53:18.171683       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:53:18.171746       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:53:18.171756       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:53:18.265335       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:53:18.572902       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:53:18.572933       1 metrics.go:72] Registering metrics
	I1025 09:53:18.572989       1 controller.go:711] "Syncing nftables rules"
	I1025 09:53:28.176425       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:53:28.176494       1 main.go:301] handling current node
	I1025 09:53:38.172214       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:53:38.172257       1 main.go:301] handling current node
	
	
	==> kube-apiserver [529e878c8548e42c348711a1fcce8ee92d9635c8083ced74d9071223dcf7121d] <==
	I1025 09:53:07.617509       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:53:07.618911       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:53:07.621692       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:53:07.621853       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:53:07.625558       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:53:07.626168       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:53:07.809491       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:53:08.522055       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:53:08.526650       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:53:08.526667       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:53:09.020784       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:53:09.066030       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:53:09.126723       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:53:09.132615       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 09:53:09.133623       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:53:09.137899       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:53:09.536229       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:53:10.179443       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:53:10.189807       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:53:10.197244       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:53:14.536893       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1025 09:53:15.389296       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:53:15.639179       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:53:15.644338       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1025 09:53:39.238165       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:55928: use of closed network connection
	
	
	==> kube-controller-manager [b6319461edbaff2d1603ed8bbf59eab7566bc255acc2b24acba258bd507d48c7] <==
	I1025 09:53:14.501470       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:53:14.533420       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:53:14.533435       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:53:14.533545       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:53:14.533592       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:53:14.534787       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:53:14.534809       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:53:14.534844       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:53:14.534845       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:53:14.534873       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:53:14.534885       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:53:14.534898       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:53:14.536237       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:53:14.537411       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:53:14.537551       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:53:14.537658       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-656799"
	I1025 09:53:14.537724       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:53:14.540196       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:53:14.540874       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:53:14.542292       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:53:14.552145       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:53:14.552165       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:53:14.552174       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:53:14.554971       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:53:29.540764       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f24211d30df72740a4244dd75e4dae56a95130bca830dd61b819b5b8f831ed8d] <==
	I1025 09:53:15.569289       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:53:15.647614       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:53:15.748575       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:53:15.748611       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 09:53:15.748687       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:53:15.773150       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:53:15.773210       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:53:15.779400       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:53:15.779888       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:53:15.779933       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:53:15.781313       1 config.go:200] "Starting service config controller"
	I1025 09:53:15.781365       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:53:15.781340       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:53:15.781387       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:53:15.781448       1 config.go:309] "Starting node config controller"
	I1025 09:53:15.781458       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:53:15.781698       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:53:15.781734       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:53:15.881548       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:53:15.881574       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:53:15.881620       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:53:15.882909       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
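
The "Waiting for caches to sync" / "Caches are synced" pairs here (and in the kindnet and kube-controller-manager output above) are the standard client-go shared-informer startup handshake: start the informers, then block until their stores reflect a full initial list. A minimal sketch of that pattern, assuming in-cluster config:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	svcInformer := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // analogous to "Starting service config controller"

	// Analogous to "Waiting for caches to sync" ... "Caches are synced".
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		panic("informer caches never synced")
	}
	fmt.Println("caches are synced; the informer store is now safe to read")
}
```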
	
	
	==> kube-scheduler [74c34a03974a3e92fbe8350405d3d257c1fa16ecafcc0440cef548b5bc625d99] <==
	E1025 09:53:07.579602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:53:07.579649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:53:07.579815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:53:07.579891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:53:07.579892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:53:07.579965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:53:07.580059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:53:07.579564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:53:08.405820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:53:08.431635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 09:53:08.434916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:53:08.448580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:53:08.466302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:53:08.538199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:53:08.557496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:53:08.557665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:53:08.595021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:53:08.598117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:53:08.615599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:53:08.676883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:53:08.739530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:53:08.768899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:53:08.773014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:53:08.850618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1025 09:53:10.476203       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
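
The burst of "Failed to watch ... forbidden" errors is transient: the scheduler's informers start before the RBAC bootstrap roles have been reconciled, and once they are granted the final "Caches are synced" line appears. A component can test one of these permissions explicitly with a SelfSubjectAccessReview; a minimal client-go sketch (config loading assumed in-cluster as before):

```go
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the apiserver: may the current identity list pods cluster-wide
	// (the permission the failing watch above needs)?
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "pods"},
		},
	}
	resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
}
```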
	
	
	==> kubelet <==
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: I1025 09:53:14.564955    2252 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: I1025 09:53:14.646957    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ppj7\" (UniqueName: \"kubernetes.io/projected/150e67b8-c0b3-4e74-a94d-a43506de4a53-kube-api-access-6ppj7\") pod \"kube-proxy-gfph2\" (UID: \"150e67b8-c0b3-4e74-a94d-a43506de4a53\") " pod="kube-system/kube-proxy-gfph2"
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: I1025 09:53:14.647001    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c4a372bb-2500-4e98-9012-a3076916ffe8-cni-cfg\") pod \"kindnet-nbj7f\" (UID: \"c4a372bb-2500-4e98-9012-a3076916ffe8\") " pod="kube-system/kindnet-nbj7f"
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: I1025 09:53:14.647016    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4a372bb-2500-4e98-9012-a3076916ffe8-xtables-lock\") pod \"kindnet-nbj7f\" (UID: \"c4a372bb-2500-4e98-9012-a3076916ffe8\") " pod="kube-system/kindnet-nbj7f"
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: I1025 09:53:14.647050    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4a372bb-2500-4e98-9012-a3076916ffe8-lib-modules\") pod \"kindnet-nbj7f\" (UID: \"c4a372bb-2500-4e98-9012-a3076916ffe8\") " pod="kube-system/kindnet-nbj7f"
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: I1025 09:53:14.647067    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp6p\" (UniqueName: \"kubernetes.io/projected/c4a372bb-2500-4e98-9012-a3076916ffe8-kube-api-access-2cp6p\") pod \"kindnet-nbj7f\" (UID: \"c4a372bb-2500-4e98-9012-a3076916ffe8\") " pod="kube-system/kindnet-nbj7f"
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: I1025 09:53:14.647124    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/150e67b8-c0b3-4e74-a94d-a43506de4a53-xtables-lock\") pod \"kube-proxy-gfph2\" (UID: \"150e67b8-c0b3-4e74-a94d-a43506de4a53\") " pod="kube-system/kube-proxy-gfph2"
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: I1025 09:53:14.647173    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/150e67b8-c0b3-4e74-a94d-a43506de4a53-lib-modules\") pod \"kube-proxy-gfph2\" (UID: \"150e67b8-c0b3-4e74-a94d-a43506de4a53\") " pod="kube-system/kube-proxy-gfph2"
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: I1025 09:53:14.647212    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/150e67b8-c0b3-4e74-a94d-a43506de4a53-kube-proxy\") pod \"kube-proxy-gfph2\" (UID: \"150e67b8-c0b3-4e74-a94d-a43506de4a53\") " pod="kube-system/kube-proxy-gfph2"
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: E1025 09:53:14.753903    2252 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: E1025 09:53:14.753950    2252 projected.go:196] Error preparing data for projected volume kube-api-access-2cp6p for pod kube-system/kindnet-nbj7f: configmap "kube-root-ca.crt" not found
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: E1025 09:53:14.754028    2252 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4a372bb-2500-4e98-9012-a3076916ffe8-kube-api-access-2cp6p podName:c4a372bb-2500-4e98-9012-a3076916ffe8 nodeName:}" failed. No retries permitted until 2025-10-25 09:53:15.254000944 +0000 UTC m=+5.303227194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2cp6p" (UniqueName: "kubernetes.io/projected/c4a372bb-2500-4e98-9012-a3076916ffe8-kube-api-access-2cp6p") pod "kindnet-nbj7f" (UID: "c4a372bb-2500-4e98-9012-a3076916ffe8") : configmap "kube-root-ca.crt" not found
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: E1025 09:53:14.753903    2252 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: E1025 09:53:14.754059    2252 projected.go:196] Error preparing data for projected volume kube-api-access-6ppj7 for pod kube-system/kube-proxy-gfph2: configmap "kube-root-ca.crt" not found
	Oct 25 09:53:14 no-preload-656799 kubelet[2252]: E1025 09:53:14.754128    2252 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/150e67b8-c0b3-4e74-a94d-a43506de4a53-kube-api-access-6ppj7 podName:150e67b8-c0b3-4e74-a94d-a43506de4a53 nodeName:}" failed. No retries permitted until 2025-10-25 09:53:15.254100753 +0000 UTC m=+5.303326999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6ppj7" (UniqueName: "kubernetes.io/projected/150e67b8-c0b3-4e74-a94d-a43506de4a53-kube-api-access-6ppj7") pod "kube-proxy-gfph2" (UID: "150e67b8-c0b3-4e74-a94d-a43506de4a53") : configmap "kube-root-ca.crt" not found
	Oct 25 09:53:16 no-preload-656799 kubelet[2252]: I1025 09:53:16.096176    2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gfph2" podStartSLOduration=2.09615672 podStartE2EDuration="2.09615672s" podCreationTimestamp="2025-10-25 09:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:16.085407899 +0000 UTC m=+6.134634150" watchObservedRunningTime="2025-10-25 09:53:16.09615672 +0000 UTC m=+6.145382969"
	Oct 25 09:53:18 no-preload-656799 kubelet[2252]: I1025 09:53:18.088798    2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nbj7f" podStartSLOduration=1.7873763889999998 podStartE2EDuration="4.088779991s" podCreationTimestamp="2025-10-25 09:53:14 +0000 UTC" firstStartedPulling="2025-10-25 09:53:15.465570388 +0000 UTC m=+5.514796638" lastFinishedPulling="2025-10-25 09:53:17.766974011 +0000 UTC m=+7.816200240" observedRunningTime="2025-10-25 09:53:18.088580445 +0000 UTC m=+8.137806695" watchObservedRunningTime="2025-10-25 09:53:18.088779991 +0000 UTC m=+8.138006241"
	Oct 25 09:53:28 no-preload-656799 kubelet[2252]: I1025 09:53:28.580948    2252 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:53:28 no-preload-656799 kubelet[2252]: I1025 09:53:28.654394    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwxbv\" (UniqueName: \"kubernetes.io/projected/4e4f58ae-a176-4a16-a7ec-035c2170c2c3-kube-api-access-kwxbv\") pod \"storage-provisioner\" (UID: \"4e4f58ae-a176-4a16-a7ec-035c2170c2c3\") " pod="kube-system/storage-provisioner"
	Oct 25 09:53:28 no-preload-656799 kubelet[2252]: I1025 09:53:28.654452    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8784813-9a51-43f5-ae3a-d5f9a1cd7d41-config-volume\") pod \"coredns-66bc5c9577-sw9hv\" (UID: \"b8784813-9a51-43f5-ae3a-d5f9a1cd7d41\") " pod="kube-system/coredns-66bc5c9577-sw9hv"
	Oct 25 09:53:28 no-preload-656799 kubelet[2252]: I1025 09:53:28.654491    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rphvn\" (UniqueName: \"kubernetes.io/projected/b8784813-9a51-43f5-ae3a-d5f9a1cd7d41-kube-api-access-rphvn\") pod \"coredns-66bc5c9577-sw9hv\" (UID: \"b8784813-9a51-43f5-ae3a-d5f9a1cd7d41\") " pod="kube-system/coredns-66bc5c9577-sw9hv"
	Oct 25 09:53:28 no-preload-656799 kubelet[2252]: I1025 09:53:28.654513    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4e4f58ae-a176-4a16-a7ec-035c2170c2c3-tmp\") pod \"storage-provisioner\" (UID: \"4e4f58ae-a176-4a16-a7ec-035c2170c2c3\") " pod="kube-system/storage-provisioner"
	Oct 25 09:53:29 no-preload-656799 kubelet[2252]: I1025 09:53:29.139058    2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.139033686 podStartE2EDuration="14.139033686s" podCreationTimestamp="2025-10-25 09:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:29.127240733 +0000 UTC m=+19.176466983" watchObservedRunningTime="2025-10-25 09:53:29.139033686 +0000 UTC m=+19.188259935"
	Oct 25 09:53:31 no-preload-656799 kubelet[2252]: I1025 09:53:31.148528    2252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sw9hv" podStartSLOduration=16.148499549 podStartE2EDuration="16.148499549s" podCreationTimestamp="2025-10-25 09:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:29.139319549 +0000 UTC m=+19.188545799" watchObservedRunningTime="2025-10-25 09:53:31.148499549 +0000 UTC m=+21.197725800"
	Oct 25 09:53:31 no-preload-656799 kubelet[2252]: I1025 09:53:31.172126    2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc4xt\" (UniqueName: \"kubernetes.io/projected/e58484e4-93ad-4c1e-af87-8034efb88486-kube-api-access-hc4xt\") pod \"busybox\" (UID: \"e58484e4-93ad-4c1e-af87-8034efb88486\") " pod="default/busybox"
	
	
	==> storage-provisioner [d69ea4a09075701d8a6c6235620e79f02aa6ffea5a312f1dfd78a8fb9b63b647] <==
	I1025 09:53:29.019979       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:53:29.030913       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:53:29.031064       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:53:29.033502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:29.042502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:53:29.042818       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:53:29.043978       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7977a3c3-46cf-4478-80b1-82f8aa5df618", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-656799_db4fd9df-69f6-42ed-85cc-26f58d626c02 became leader
	W1025 09:53:29.046902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:53:29.047331       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-656799_db4fd9df-69f6-42ed-85cc-26f58d626c02!
	W1025 09:53:29.054842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:53:29.149017       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-656799_db4fd9df-69f6-42ed-85cc-26f58d626c02!
	W1025 09:53:31.058413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:31.063629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:33.067040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:33.071491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:35.074707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:35.079133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:37.081563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:37.086265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:39.089228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:39.093310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-656799 -n no-preload-656799
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-656799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.21s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (302.178591ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
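The MK_ADDON_ENABLE_PAUSED error above comes from minikube's pre-flight paused-state probe: before enabling an addon it lists containers through the node's OCI runtime, and here `sudo runc list -f json` fails because /run/runc does not exist inside the CRI-O node. A minimal manual re-run of that probe (a triage sketch, assuming the default-k8s-diff-port-880773 profile is still running):

	# Re-run the exact command minikube executed (taken from the stderr above):
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-880773 -- sudo runc list -f json
	# CRI-O may keep runc state under a different root (assumption; the location
	# is configurable via runtime_root in crio.conf), so check what actually exists:
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-880773 -- ls /run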
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-880773 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-880773 describe deploy/metrics-server -n kube-system: exit status 1 (97.899502ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-880773 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
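The check at start_stop_delete_test.go:219 expects the metrics-server Deployment to carry the overridden image prefix fake.domain/registry.k8s.io/echoserver:1.4, but `kubectl describe` already returned NotFound, so the addon manifest was evidently never applied. A hedged way to confirm what, if anything, was deployed, reusing the test's kubectl context:

	# Triage sketch: list kube-system deployments and their images, then pull
	# just the container image field once metrics-server exists:
	kubectl --context default-k8s-diff-port-880773 -n kube-system get deploy -o wide
	kubectl --context default-k8s-diff-port-880773 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'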
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-880773
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-880773:

-- stdout --
	[
	    {
	        "Id": "9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97",
	        "Created": "2025-10-25T09:52:38.521061713Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 425348,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:52:38.563030553Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97/hosts",
	        "LogPath": "/var/lib/docker/containers/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97-json.log",
	        "Name": "/default-k8s-diff-port-880773",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-880773:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-880773",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97",
	                "LowerDir": "/var/lib/docker/overlay2/7406dd3ccf074a8c0d63e89c8d8fb56dbbf724c2e72ef4e5d3645a687d36caae-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7406dd3ccf074a8c0d63e89c8d8fb56dbbf724c2e72ef4e5d3645a687d36caae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7406dd3ccf074a8c0d63e89c8d8fb56dbbf724c2e72ef4e5d3645a687d36caae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7406dd3ccf074a8c0d63e89c8d8fb56dbbf724c2e72ef4e5d3645a687d36caae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-880773",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-880773/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-880773",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-880773",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-880773",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "762d4d4aebd1b5946045e724c3162305eeb0cd3df9b8462c23378e19a1963f4e",
	            "SandboxKey": "/var/run/docker/netns/762d4d4aebd1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33224"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33222"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33223"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-880773": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:b6:95:70:7f:2c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6ddf7a97662fac8be0712f15b409763064fa73f60cb64be86aabc92b884c53a0",
	                    "EndpointID": "564e637a02f81a63ecaa0db48de8d1aa575eb82a262c2781bf6c65d2b1391913",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-880773",
	                        "9f0bdf9b54bd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-880773 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-880773 logs -n 25: (1.142300169s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-035825 sudo systemctl cat crio --no-pager                                                                                                                                                                               │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                     │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ ssh     │ -p enable-default-cni-035825 sudo crio config                                                                                                                                                                                                 │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:53 UTC │
	│ delete  │ -p enable-default-cni-035825                                                                                                                                                                                                                  │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-042675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p newest-cni-042675 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-042675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-676314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p old-k8s-version-676314 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ image   │ newest-cni-042675 image list --format=json                                                                                                                                                                                                    │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ pause   │ -p newest-cni-042675 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-656799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-001549                                                                                                                                                                                                               │ disable-driver-mounts-001549 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p no-preload-656799 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-676314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-656799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:53:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:53:58.233610  445741 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:53:58.233991  445741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:58.234028  445741 out.go:374] Setting ErrFile to fd 2...
	I1025 09:53:58.234041  445741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:58.234265  445741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:53:58.234728  445741 out.go:368] Setting JSON to false
	I1025 09:53:58.236262  445741 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5782,"bootTime":1761380256,"procs":370,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:53:58.236428  445741 start.go:141] virtualization: kvm guest
	I1025 09:53:58.238485  445741 out.go:179] * [no-preload-656799] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:53:58.239575  445741 notify.go:220] Checking for updates...
	I1025 09:53:58.239587  445741 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:53:58.240798  445741 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:53:58.241779  445741 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:53:58.242912  445741 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:53:58.244132  445741 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:53:58.245508  445741 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:53:58.247390  445741 config.go:182] Loaded profile config "no-preload-656799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:58.248106  445741 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:53:58.277221  445741 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:53:58.277451  445741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:53:58.350529  445741 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-25 09:53:58.336321115 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:53:58.350665  445741 docker.go:318] overlay module found
	I1025 09:53:58.352168  445741 out.go:179] * Using the docker driver based on existing profile
	I1025 09:53:58.353208  445741 start.go:305] selected driver: docker
	I1025 09:53:58.353236  445741 start.go:925] validating driver "docker" against &{Name:no-preload-656799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-656799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:53:58.353402  445741 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:53:58.354309  445741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:53:58.436211  445741 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-25 09:53:58.423839322 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:53:58.436696  445741 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:53:58.436749  445741 cni.go:84] Creating CNI manager for ""
	I1025 09:53:58.436822  445741 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:53:58.436874  445741 start.go:349] cluster config:
	{Name:no-preload-656799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-656799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:53:58.438462  445741 out.go:179] * Starting "no-preload-656799" primary control-plane node in "no-preload-656799" cluster
	I1025 09:53:58.439564  445741 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:53:58.440847  445741 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:53:58.441861  445741 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:53:58.441956  445741 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:53:58.442008  445741 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/no-preload-656799/config.json ...
	I1025 09:53:58.442115  445741 cache.go:107] acquiring lock: {Name:mk793f4f1a518f55bdfcf74f917ef6235140b2e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:58.442189  445741 cache.go:107] acquiring lock: {Name:mkbd36787f6d9c917e25b00c60d785af776b63b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:58.442244  445741 cache.go:115] /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 09:53:58.442261  445741 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 167.988µs
	I1025 09:53:58.442272  445741 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 09:53:58.442271  445741 cache.go:115] /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 09:53:58.442241  445741 cache.go:107] acquiring lock: {Name:mkbda3860fe49b7f62d5491caee1294621103b21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:58.442281  445741 cache.go:107] acquiring lock: {Name:mk04e93bec7fd7aa93784efad066bfdceb598130 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:58.442290  445741 cache.go:107] acquiring lock: {Name:mk98db0d9d5cc15e86fcea6c5cb99f34d58a795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:58.442338  445741 cache.go:115] /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 09:53:58.442357  445741 cache.go:115] /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 09:53:58.442315  445741 cache.go:107] acquiring lock: {Name:mk1ac2e08a7f3ee3d4b3ddeb67fcde5be39c6753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:58.442367  445741 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 79.457µs
	I1025 09:53:58.442287  445741 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 111.842µs
	I1025 09:53:58.442377  445741 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 09:53:58.442381  445741 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 09:53:58.442379  445741 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 102.476µs
	I1025 09:53:58.442389  445741 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 09:53:58.442395  445741 cache.go:107] acquiring lock: {Name:mk722820a110a23db7b43134eed27d8d2152e615 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:58.442122  445741 cache.go:107] acquiring lock: {Name:mk6f22cc9266a016319222db1ba438a0109b1167 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:58.442566  445741 cache.go:115] /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 09:53:58.442566  445741 cache.go:115] /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 09:53:58.442565  445741 cache.go:115] /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 09:53:58.442581  445741 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 188.013µs
	I1025 09:53:58.442583  445741 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 275.553µs
	I1025 09:53:58.442590  445741 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 09:53:58.442592  445741 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 09:53:58.442641  445741 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 479.44µs
	I1025 09:53:58.442656  445741 cache.go:115] /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1025 09:53:58.442659  445741 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 09:53:58.442672  445741 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 489.909µs
	I1025 09:53:58.442691  445741 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 09:53:58.442705  445741 cache.go:87] Successfully saved all images to host disk.
	I1025 09:53:58.468193  445741 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:53:58.468211  445741 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:53:58.468231  445741 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:53:58.468254  445741 start.go:360] acquireMachinesLock for no-preload-656799: {Name:mk78d0b758bc2c48360cb4d3aac5a2e0998c28fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:53:58.468323  445741 start.go:364] duration metric: took 51.01µs to acquireMachinesLock for "no-preload-656799"
	I1025 09:53:58.468375  445741 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:53:58.468383  445741 fix.go:54] fixHost starting: 
	I1025 09:53:58.468653  445741 cli_runner.go:164] Run: docker container inspect no-preload-656799 --format={{.State.Status}}
	I1025 09:53:58.491008  445741 fix.go:112] recreateIfNeeded on no-preload-656799: state=Stopped err=<nil>
	W1025 09:53:58.491043  445741 fix.go:138] unexpected machine state, will restart: <nil>
	
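The "acquireMachinesLock" lines above show minikube serializing machine operations behind a named lock with Delay:500ms and Timeout:10m0s. A minimal sketch of that acquire-with-retry shape, with a hypothetical tryAcquire standing in for minikube's real file-based mutex:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// held simulates another process holding the machine lock for one attempt.
var held = true

// tryAcquire is a hypothetical, non-blocking attempt standing in for
// minikube's file-based machine lock.
func tryAcquire(name string) error {
	if held {
		held = false // pretend the other holder releases it before our retry
		return errors.New("lock busy: " + name)
	}
	return nil
}

// acquire retries tryAcquire on a fixed delay until the timeout elapses,
// mirroring the Delay:500ms / Timeout:10m0s parameters logged above.
func acquire(name string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := tryAcquire(name); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %q after %v", name, timeout)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquire("no-preload-656799", 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("took %s to acquire lock\n", time.Since(start))
}
```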
	
	==> CRI-O <==
	Oct 25 09:53:46 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:46.269636177Z" level=info msg="Starting container: 242078b30ad5adee4804041ae3161f6d3510c4de76532b1ba8f8893680df6466" id=a078fc9b-0dba-4cfe-b526-b83096087a5d name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:53:46 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:46.27166667Z" level=info msg="Started container" PID=1869 containerID=242078b30ad5adee4804041ae3161f6d3510c4de76532b1ba8f8893680df6466 description=kube-system/coredns-66bc5c9577-29ltg/coredns id=a078fc9b-0dba-4cfe-b526-b83096087a5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=bce43c0e17ee4f727c56f8c92634c1064bf393c179f9e7356bca69fb84605a84
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.803786389Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bb5ceef3-4758-47b1-8197-ecf0ca608913 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.803885672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.809798912Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:846184e7a4c53bea216be4e04e9851ba75fb57bac52ef8edd1faeec5be3aaa68 UID:f76f3cf0-8a0d-49fb-82e3-f5be92acdc5c NetNS:/var/run/netns/78fa9b40-577e-44f7-b9f7-eb8350c1b4a9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b158}] Aliases:map[]}"
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.809834453Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.819692101Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:846184e7a4c53bea216be4e04e9851ba75fb57bac52ef8edd1faeec5be3aaa68 UID:f76f3cf0-8a0d-49fb-82e3-f5be92acdc5c NetNS:/var/run/netns/78fa9b40-577e-44f7-b9f7-eb8350c1b4a9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b158}] Aliases:map[]}"
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.819819298Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.820571225Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.821281918Z" level=info msg="Ran pod sandbox 846184e7a4c53bea216be4e04e9851ba75fb57bac52ef8edd1faeec5be3aaa68 with infra container: default/busybox/POD" id=bb5ceef3-4758-47b1-8197-ecf0ca608913 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.822674148Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=72e6f303-e57d-4474-80f4-e4dab1ed8bc0 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.822795408Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=72e6f303-e57d-4474-80f4-e4dab1ed8bc0 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.822838534Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=72e6f303-e57d-4474-80f4-e4dab1ed8bc0 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.823538563Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3d970c25-3484-4d81-9624-84ef0380a7c8 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:53:48 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:48.825109631Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:53:50 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:50.875768968Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=3d970c25-3484-4d81-9624-84ef0380a7c8 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:53:50 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:50.876633765Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5ed9bb65-fdc7-43cf-9d26-cf85af74b778 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:50 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:50.878176947Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c1b4f004-9d70-4dbd-a763-3148c40f5d55 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:53:50 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:50.881818821Z" level=info msg="Creating container: default/busybox/busybox" id=15c42899-78ce-4170-af9f-6e8e984526d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:50 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:50.881933572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:50 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:50.886697872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:50 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:50.887168968Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:53:50 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:50.923998856Z" level=info msg="Created container 984c91504a3f3fd938b0a84fc738e7d0fde874423b1c7c973bf86f7dd5b9dac7: default/busybox/busybox" id=15c42899-78ce-4170-af9f-6e8e984526d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:53:50 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:50.924703223Z" level=info msg="Starting container: 984c91504a3f3fd938b0a84fc738e7d0fde874423b1c7c973bf86f7dd5b9dac7" id=78af55d1-2cc0-42c9-970b-708b36164b2f name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:53:50 default-k8s-diff-port-880773 crio[770]: time="2025-10-25T09:53:50.927640933Z" level=info msg="Started container" PID=1945 containerID=984c91504a3f3fd938b0a84fc738e7d0fde874423b1c7c973bf86f7dd5b9dac7 description=default/busybox/busybox id=78af55d1-2cc0-42c9-970b-708b36164b2f name=/runtime.v1.RuntimeService/StartContainer sandboxID=846184e7a4c53bea216be4e04e9851ba75fb57bac52ef8edd1faeec5be3aaa68
	
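Each CRI-O entry above carries logfmt-style fields: `time="..." level=... msg="..."` plus request ids. A short sketch, assuming fields are quoted exactly as shown, that extracts timestamp, level, and message so that, for example, the roughly two-second pull of gcr.io/k8s-minikube/busybox (09:53:48.82 to 09:53:50.87) can be measured:

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

// crioLine matches the logfmt-style fields CRI-O emits above.
var crioLine = regexp.MustCompile(`time="([^"]+)" level=(\w+) msg="((?:[^"\\]|\\.)*)"`)

func main() {
	lines := []string{
		`time="2025-10-25T09:53:48.823538563Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc"`,
		`time="2025-10-25T09:53:50.875768968Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox"`,
	}
	var prev time.Time
	for _, l := range lines {
		m := crioLine.FindStringSubmatch(l)
		if m == nil {
			continue
		}
		ts, err := time.Parse(time.RFC3339Nano, m[1])
		if err != nil {
			continue
		}
		if !prev.IsZero() {
			fmt.Printf("+%v ", ts.Sub(prev)) // gap since the previous entry
		}
		prev = ts
		fmt.Printf("[%s] %s\n", m[2], m[3])
	}
}
```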
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	984c91504a3f3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago        Running             busybox                   0                   846184e7a4c53       busybox                                                default
	242078b30ad5a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago       Running             coredns                   0                   bce43c0e17ee4       coredns-66bc5c9577-29ltg                               kube-system
	eb9c116188ca0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago       Running             storage-provisioner       0                   fe589dd0f44bf       storage-provisioner                                    kube-system
	de71740497eb8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      54 seconds ago       Running             kindnet-cni               0                   afd0ca0ad0919       kindnet-cnqn8                                          kube-system
	beae70637062a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      54 seconds ago       Running             kube-proxy                0                   6553275b74316       kube-proxy-bg94v                                       kube-system
	4606b3c16e741       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   0d38d3a564be3       kube-scheduler-default-k8s-diff-port-880773            kube-system
	07f2022b6c87c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   d7b75d65164d8       kube-controller-manager-default-k8s-diff-port-880773   kube-system
	e401c3230e18e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   bc514ddfdca81       etcd-default-k8s-diff-port-880773                      kube-system
	ffb715906d4d4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   3736575472a2a       kube-apiserver-default-k8s-diff-port-880773            kube-system
	
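The CONTAINER and POD ID columns above are 13-character truncations of the 64-character ids in the CRI-O section, so the two sections can be joined by simple prefix matching:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Full container ids taken from the CRI-O log above.
	full := []string{
		"984c91504a3f3fd938b0a84fc738e7d0fde874423b1c7c973bf86f7dd5b9dac7",
		"242078b30ad5adee4804041ae3161f6d3510c4de76532b1ba8f8893680df6466",
	}
	// Truncated id from the CONTAINER column above.
	short := "984c91504a3f3"
	for _, id := range full {
		if strings.HasPrefix(id, short) {
			fmt.Println(short, "=>", id)
		}
	}
}
```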
	
	==> coredns [242078b30ad5adee4804041ae3161f6d3510c4de76532b1ba8f8893680df6466] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50435 - 32203 "HINFO IN 7232698064700582845.2322731177094636835. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027792949s
	
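The 127.0.0.1 HINFO query for a long random name is CoreDNS's loop-plugin self-probe; the NXDOMAIN answer means no forwarding loop was detected. A sketch of an equivalent probe using the github.com/miekg/dns library (an illustration, not CoreDNS's own code):

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	// Query the local resolver with an HINFO question for the random-looking
	// name logged above; NXDOMAIN (rather than our own query echoing back)
	// indicates no forwarding loop.
	m := new(dns.Msg)
	m.SetQuestion("7232698064700582845.2322731177094636835.", dns.TypeHINFO)
	var c dns.Client
	r, rtt, err := c.Exchange(m, "127.0.0.1:53")
	if err != nil {
		fmt.Println("exchange failed:", err)
		return
	}
	fmt.Printf("rcode=%s rtt=%v\n", dns.RcodeToString[r.Rcode], rtt)
}
```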
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-880773
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-880773
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=default-k8s-diff-port-880773
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_53_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:52:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-880773
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:53:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:53:50 +0000   Sat, 25 Oct 2025 09:52:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:53:50 +0000   Sat, 25 Oct 2025 09:52:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:53:50 +0000   Sat, 25 Oct 2025 09:52:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:53:50 +0000   Sat, 25 Oct 2025 09:53:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-880773
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                0255f5ba-c095-4977-bf24-556780863944
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-29ltg                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     55s
	  kube-system                 etcd-default-k8s-diff-port-880773                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-cnqn8                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-880773             250m (3%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-880773    200m (2%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-bg94v                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-880773             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientPID
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node default-k8s-diff-port-880773 event: Registered Node default-k8s-diff-port-880773 in Controller
	  Normal  NodeReady                14s                kubelet          Node default-k8s-diff-port-880773 status is now: NodeReady
	
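The percentages in the Allocated resources table are integer-truncated ratios of requests to allocatable: 850m CPU against 8 cores (8000m) is about 10.6%, shown as 10%, and 220Mi against 32863364Ki is about 0.7%, shown as 0%. Checked directly from the numbers above:

```go
package main

import "fmt"

func main() {
	const (
		cpuRequestMilli     = 850        // 100m+100m+100m+250m+200m+100m from the pod table above
		cpuAllocatableMilli = 8 * 1000   // cpu: 8
		memRequestKi        = 220 * 1024 // 220Mi
		memAllocatableKi    = 32863364   // memory: 32863364Ki
	)
	// kubectl truncates toward zero, so 10.6% prints as 10% and 0.7% as 0%.
	fmt.Printf("cpu:    %d%%\n", cpuRequestMilli*100/cpuAllocatableMilli)
	fmt.Printf("memory: %d%%\n", memRequestKi*100/memAllocatableKi)
}
```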
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	
	
	==> etcd [e401c3230e18e232eed7d8c7cb2bf93c5f35ae1de60e302a88031a575d62ad3a] <==
	{"level":"warn","ts":"2025-10-25T09:52:57.219725Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:52:56.825008Z","time spent":"393.861943ms","remote":"127.0.0.1:59138","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":338,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/namespaces/kube-system\" mod_revision:0 > success:<request_put:<key:\"/registry/namespaces/kube-system\" value_size:298 >> failure:<>"}
	{"level":"warn","ts":"2025-10-25T09:52:57.218476Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"393.349364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-25T09:52:57.219839Z","caller":"traceutil/trace.go:172","msg":"trace[1874133880] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:0; response_revision:4; }","duration":"394.738533ms","start":"2025-10-25T09:52:56.825080Z","end":"2025-10-25T09:52:57.219819Z","steps":["trace[1874133880] 'agreement among raft nodes before linearized reading'  (duration: 312.707633ms)","trace[1874133880] 'range keys from in-memory index tree'  (duration: 80.62277ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:52:57.219874Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:52:56.825074Z","time spent":"394.786463ms","remote":"127.0.0.1:59138","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":0,"response size":28,"request content":"key:\"/registry/namespaces/kube-system\" limit:1 "}
	{"level":"warn","ts":"2025-10-25T09:52:57.219974Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:52:56.876568Z","time spent":"342.637692ms","remote":"127.0.0.1:60056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":992,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.admissionregistration.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.admissionregistration.k8s.io\" value_size:908 >> failure:<>"}
	{"level":"warn","ts":"2025-10-25T09:52:57.220135Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:52:56.874447Z","time spent":"344.799783ms","remote":"127.0.0.1:59138","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/namespaces/kube-system\" mod_revision:0 > success:<request_put:<key:\"/registry/namespaces/kube-system\" value_size:298 >> failure:<>"}
	{"level":"warn","ts":"2025-10-25T09:52:57.220238Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:52:56.876533Z","time spent":"342.779199ms","remote":"127.0.0.1:59800","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":688,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/prioritylevelconfigurations/system\" mod_revision:0 > success:<request_put:<key:\"/registry/prioritylevelconfigurations/system\" value_size:636 >> failure:<>"}
	{"level":"info","ts":"2025-10-25T09:52:57.221718Z","caller":"traceutil/trace.go:172","msg":"trace[713022490] transaction","detail":"{read_only:false; response_revision:8; number_of_response:1; }","duration":"344.945661ms","start":"2025-10-25T09:52:56.876721Z","end":"2025-10-25T09:52:57.221667Z","steps":["trace[713022490] 'process raft request'  (duration: 344.794225ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:52:57.221802Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:52:56.876701Z","time spent":"345.058367ms","remote":"127.0.0.1:60056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":968,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.apiextensions.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.apiextensions.k8s.io\" value_size:892 >> failure:<>"}
	{"level":"info","ts":"2025-10-25T09:52:57.221836Z","caller":"traceutil/trace.go:172","msg":"trace[649897082] transaction","detail":"{read_only:false; response_revision:13; number_of_response:1; }","duration":"198.765406ms","start":"2025-10-25T09:52:57.023054Z","end":"2025-10-25T09:52:57.221820Z","steps":["trace[649897082] 'process raft request'  (duration: 198.708504ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T09:52:57.222338Z","caller":"traceutil/trace.go:172","msg":"trace[1008863950] transaction","detail":"{read_only:false; response_revision:10; number_of_response:1; }","duration":"345.537581ms","start":"2025-10-25T09:52:56.876788Z","end":"2025-10-25T09:52:57.222326Z","steps":["trace[1008863950] 'process raft request'  (duration: 344.862549ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:52:57.222415Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:52:56.876779Z","time spent":"345.607987ms","remote":"127.0.0.1:60056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":971,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io\" value_size:894 >> failure:<>"}
	{"level":"info","ts":"2025-10-25T09:52:57.222824Z","caller":"traceutil/trace.go:172","msg":"trace[256179675] transaction","detail":"{read_only:false; response_revision:11; number_of_response:1; }","duration":"345.819844ms","start":"2025-10-25T09:52:56.876799Z","end":"2025-10-25T09:52:57.222618Z","steps":["trace[256179675] 'process raft request'  (duration: 344.887069ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:52:57.222901Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:52:56.876791Z","time spent":"346.075605ms","remote":"127.0.0.1:60056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":883,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.\" value_size:827 >> failure:<>"}
	{"level":"info","ts":"2025-10-25T09:52:57.223575Z","caller":"traceutil/trace.go:172","msg":"trace[1509294075] transaction","detail":"{read_only:false; response_revision:12; number_of_response:1; }","duration":"335.806966ms","start":"2025-10-25T09:52:56.887581Z","end":"2025-10-25T09:52:57.223388Z","steps":["trace[1509294075] 'process raft request'  (duration: 334.140269ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:52:57.223650Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:52:56.887563Z","time spent":"336.05401ms","remote":"127.0.0.1:59530","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":280,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/servicecidrs/kubernetes\" mod_revision:0 > success:<request_put:<key:\"/registry/servicecidrs/kubernetes\" value_size:239 >> failure:<>"}
	{"level":"info","ts":"2025-10-25T09:52:57.223400Z","caller":"traceutil/trace.go:172","msg":"trace[74167294] transaction","detail":"{read_only:false; response_revision:9; number_of_response:1; }","duration":"346.602214ms","start":"2025-10-25T09:52:56.876778Z","end":"2025-10-25T09:52:57.223380Z","steps":["trace[74167294] 'process raft request'  (duration: 344.820661ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:52:57.223814Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T09:52:56.876759Z","time spent":"347.023162ms","remote":"127.0.0.1:60056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":920,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.apps\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.apps\" value_size:860 >> failure:<>"}
	{"level":"warn","ts":"2025-10-25T09:52:57.224074Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.271364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-10-25T09:52:57.224121Z","caller":"traceutil/trace.go:172","msg":"trace[1617210451] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:13; }","duration":"121.31363ms","start":"2025-10-25T09:52:57.102791Z","end":"2025-10-25T09:52:57.224104Z","steps":["trace[1617210451] 'agreement among raft nodes before linearized reading'  (duration: 121.136836ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:52:57.224295Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.242819ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/default-k8s-diff-port-880773\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-25T09:52:57.224380Z","caller":"traceutil/trace.go:172","msg":"trace[657417008] range","detail":"{range_begin:/registry/csinodes/default-k8s-diff-port-880773; range_end:; response_count:0; response_revision:13; }","duration":"154.337963ms","start":"2025-10-25T09:52:57.070019Z","end":"2025-10-25T09:52:57.224357Z","steps":["trace[657417008] 'agreement among raft nodes before linearized reading'  (duration: 154.189473ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:52:57.225319Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.869853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-25s74272w5kcbzwf7h42zjur3i\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-25T09:52:57.225379Z","caller":"traceutil/trace.go:172","msg":"trace[1296281420] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-25s74272w5kcbzwf7h42zjur3i; range_end:; response_count:0; response_revision:13; }","duration":"198.932217ms","start":"2025-10-25T09:52:57.026436Z","end":"2025-10-25T09:52:57.225368Z","steps":["trace[1296281420] 'agreement among raft nodes before linearized reading'  (duration: 198.823637ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:53:43.732759Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.074664ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765721096594075 > lease_revoke:<id:5b339a1ac90d2246>","response":"size:29"}
	
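etcd warns whenever an apply exceeds the 100ms expected-duration printed in those entries; the 340-390ms transactions coincide with apiserver bootstrap and a load average of 5.91 (kernel section below), so they read as host contention rather than a data problem. A sketch that decodes one of those JSON warnings and reports the overrun, using the field names exactly as printed:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// slowApply holds the fields etcd prints in the warnings above.
type slowApply struct {
	Msg      string `json:"msg"`
	Took     string `json:"took"`
	Expected string `json:"expected-duration"`
}

func main() {
	line := `{"level":"warn","msg":"apply request took too long","took":"393.349364ms","expected-duration":"100ms"}`
	var w slowApply
	if err := json.Unmarshal([]byte(line), &w); err != nil {
		panic(err)
	}
	took, _ := time.ParseDuration(w.Took)
	limit, _ := time.ParseDuration(w.Expected)
	fmt.Printf("%s: %v over the %v budget\n", w.Msg, took-limit, limit)
}
```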
	
	==> kernel <==
	 09:53:59 up  1:36,  0 user,  load average: 5.91, 4.49, 2.71
	Linux default-k8s-diff-port-880773 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [de71740497eb8baff993ad07391322d6a10c44e36e004d3a5f5aa5824f7a6288] <==
	I1025 09:53:05.099674       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:53:05.100064       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 09:53:05.100953       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:53:05.100967       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:53:05.100988       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:53:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:53:05.370837       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:53:05.370873       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:53:05.370884       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:53:05.372769       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:53:35.372503       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:53:35.372507       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:53:35.372649       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:53:35.394055       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1025 09:53:36.871521       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:53:36.871560       1 metrics.go:72] Registering metrics
	I1025 09:53:36.871633       1 controller.go:711] "Syncing nftables rules"
	I1025 09:53:45.374475       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:53:45.374534       1 main.go:301] handling current node
	I1025 09:53:55.373466       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:53:55.373523       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ffb715906d4d4aa3c1feef590ed14d657dbe4dd28f35415aa8a05b42fe18f7f5] <==
	E1025 09:52:56.823105       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1025 09:52:56.823661       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:52:57.224796       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1025 09:52:57.229947       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1025 09:52:57.230314       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:52:57.233133       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:52:57.237643       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:52:57.237735       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:52:57.679045       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:52:57.684703       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:52:57.684837       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:52:58.305908       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:52:58.350417       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:52:58.435481       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:52:58.444334       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1025 09:52:58.445808       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:52:58.452070       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:52:58.731952       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:52:59.395664       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:52:59.420838       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:52:59.441770       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:53:04.484577       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1025 09:53:04.650732       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:53:04.684737       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:53:04.691330       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [07f2022b6c87c0ff83c1d6a692a13ecabcac3b44a3986c05bc130b8f53a8a81d] <==
	I1025 09:53:03.730421       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:53:03.730516       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:53:03.731426       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:53:03.731478       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:53:03.731478       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:53:03.731512       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:53:03.731628       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:53:03.731879       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:53:03.732177       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:53:03.733273       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:53:03.735301       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:53:03.735700       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:53:03.737639       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:53:03.738925       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:53:03.741152       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:53:03.741181       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:53:03.741249       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:53:03.741289       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:53:03.741298       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:53:03.741303       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:53:03.743410       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:53:03.745726       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:53:03.748069       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-880773" podCIDRs=["10.244.0.0/24"]
	I1025 09:53:03.762878       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:53:48.736746       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [beae70637062aeef6dcc210ece23ac93b18ba6d6244864776b48f933faf5f7e8] <==
	I1025 09:53:05.008206       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:53:05.117188       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:53:05.217622       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:53:05.217745       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1025 09:53:05.217899       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:53:05.250581       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:53:05.250697       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:53:05.261673       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:53:05.262341       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:53:05.262652       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:53:05.265614       1 config.go:200] "Starting service config controller"
	I1025 09:53:05.265631       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:53:05.265760       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:53:05.265766       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:53:05.265808       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:53:05.265814       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:53:05.272417       1 config.go:309] "Starting node config controller"
	I1025 09:53:05.272560       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:53:05.272591       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:53:05.365818       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:53:05.365838       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:53:05.365934       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4606b3c16e7413e2e7dddc066a17d85e1b816065cd49cee3407cca9da3ed27c5] <==
	E1025 09:52:57.036117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:52:57.036497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:52:57.036522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:52:57.036575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:52:57.036644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:52:57.036675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:52:57.036727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:52:57.037014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:52:57.037208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:52:57.037217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:52:57.036826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:52:57.037333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:52:57.037372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:52:57.037667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:52:57.037685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:52:57.037756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:52:57.037876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:52:57.866133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:52:57.874068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:52:57.900810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:52:57.921606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:52:57.987643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:52:58.042216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:52:58.053263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 09:53:00.134094       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
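The burst of "forbidden" list/watch failures above is the usual scheduler bootstrap race: the scheduler starts listing before the apiserver has published the system:kube-scheduler RBAC bindings, and client-go reflectors simply retry with backoff until the final "Caches are synced" line at 09:53:00. A minimal sketch of that retry shape, with a hypothetical listNodes standing in for the real reflector call:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var attempts int

// listNodes is a hypothetical stand-in for a client-go List call that stays
// forbidden until RBAC bindings propagate, as in the scheduler log above.
func listNodes() error {
	attempts++
	if attempts < 3 {
		return errors.New(`nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes"`)
	}
	return nil
}

func main() {
	backoff := 100 * time.Millisecond
	for {
		if err := listNodes(); err != nil {
			fmt.Println("Failed to watch:", err)
			time.Sleep(backoff)
			backoff *= 2 // real reflectors cap and jitter this
			continue
		}
		fmt.Println("Caches are synced")
		return
	}
}
```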
	
	==> kubelet <==
	Oct 25 09:53:00 default-k8s-diff-port-880773 kubelet[1334]: E1025 09:53:00.477690    1334 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-880773\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-880773"
	Oct 25 09:53:00 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:00.509803    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-880773" podStartSLOduration=1.509783643 podStartE2EDuration="1.509783643s" podCreationTimestamp="2025-10-25 09:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:00.509311304 +0000 UTC m=+1.290744383" watchObservedRunningTime="2025-10-25 09:53:00.509783643 +0000 UTC m=+1.291216721"
	Oct 25 09:53:00 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:00.519444    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-880773" podStartSLOduration=1.519423854 podStartE2EDuration="1.519423854s" podCreationTimestamp="2025-10-25 09:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:00.519334874 +0000 UTC m=+1.300767952" watchObservedRunningTime="2025-10-25 09:53:00.519423854 +0000 UTC m=+1.300856932"
	Oct 25 09:53:00 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:00.528190    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-880773" podStartSLOduration=1.5281702959999999 podStartE2EDuration="1.528170296s" podCreationTimestamp="2025-10-25 09:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:00.527960557 +0000 UTC m=+1.309393635" watchObservedRunningTime="2025-10-25 09:53:00.528170296 +0000 UTC m=+1.309603374"
	Oct 25 09:53:00 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:00.545584    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-880773" podStartSLOduration=1.545566248 podStartE2EDuration="1.545566248s" podCreationTimestamp="2025-10-25 09:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:00.536811786 +0000 UTC m=+1.318244889" watchObservedRunningTime="2025-10-25 09:53:00.545566248 +0000 UTC m=+1.326999325"
	Oct 25 09:53:03 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:03.762019    1334 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 09:53:03 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:03.762743    1334 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:53:04 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:04.581339    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkf2v\" (UniqueName: \"kubernetes.io/projected/4b7ad6fe-03c3-41dd-9633-6ed6a648201f-kube-api-access-wkf2v\") pod \"kube-proxy-bg94v\" (UID: \"4b7ad6fe-03c3-41dd-9633-6ed6a648201f\") " pod="kube-system/kube-proxy-bg94v"
	Oct 25 09:53:04 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:04.581419    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c804731f-754b-4ce1-9609-1a6fc8cf317c-xtables-lock\") pod \"kindnet-cnqn8\" (UID: \"c804731f-754b-4ce1-9609-1a6fc8cf317c\") " pod="kube-system/kindnet-cnqn8"
	Oct 25 09:53:04 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:04.581446    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c804731f-754b-4ce1-9609-1a6fc8cf317c-lib-modules\") pod \"kindnet-cnqn8\" (UID: \"c804731f-754b-4ce1-9609-1a6fc8cf317c\") " pod="kube-system/kindnet-cnqn8"
	Oct 25 09:53:04 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:04.581470    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b7ad6fe-03c3-41dd-9633-6ed6a648201f-lib-modules\") pod \"kube-proxy-bg94v\" (UID: \"4b7ad6fe-03c3-41dd-9633-6ed6a648201f\") " pod="kube-system/kube-proxy-bg94v"
	Oct 25 09:53:04 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:04.581491    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn22f\" (UniqueName: \"kubernetes.io/projected/c804731f-754b-4ce1-9609-1a6fc8cf317c-kube-api-access-qn22f\") pod \"kindnet-cnqn8\" (UID: \"c804731f-754b-4ce1-9609-1a6fc8cf317c\") " pod="kube-system/kindnet-cnqn8"
	Oct 25 09:53:04 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:04.581515    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4b7ad6fe-03c3-41dd-9633-6ed6a648201f-kube-proxy\") pod \"kube-proxy-bg94v\" (UID: \"4b7ad6fe-03c3-41dd-9633-6ed6a648201f\") " pod="kube-system/kube-proxy-bg94v"
	Oct 25 09:53:04 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:04.581535    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b7ad6fe-03c3-41dd-9633-6ed6a648201f-xtables-lock\") pod \"kube-proxy-bg94v\" (UID: \"4b7ad6fe-03c3-41dd-9633-6ed6a648201f\") " pod="kube-system/kube-proxy-bg94v"
	Oct 25 09:53:04 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:04.581555    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c804731f-754b-4ce1-9609-1a6fc8cf317c-cni-cfg\") pod \"kindnet-cnqn8\" (UID: \"c804731f-754b-4ce1-9609-1a6fc8cf317c\") " pod="kube-system/kindnet-cnqn8"
	Oct 25 09:53:05 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:05.498019    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cnqn8" podStartSLOduration=1.497996919 podStartE2EDuration="1.497996919s" podCreationTimestamp="2025-10-25 09:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:05.497936145 +0000 UTC m=+6.279369223" watchObservedRunningTime="2025-10-25 09:53:05.497996919 +0000 UTC m=+6.279429996"
	Oct 25 09:53:05 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:05.510172    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bg94v" podStartSLOduration=1.5101502230000001 podStartE2EDuration="1.510150223s" podCreationTimestamp="2025-10-25 09:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:05.510132745 +0000 UTC m=+6.291565824" watchObservedRunningTime="2025-10-25 09:53:05.510150223 +0000 UTC m=+6.291583301"
	Oct 25 09:53:45 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:45.860040    1334 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:53:45 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:45.977390    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xzqs\" (UniqueName: \"kubernetes.io/projected/469fcc4c-281e-4595-aa3b-4ea853afb153-kube-api-access-6xzqs\") pod \"storage-provisioner\" (UID: \"469fcc4c-281e-4595-aa3b-4ea853afb153\") " pod="kube-system/storage-provisioner"
	Oct 25 09:53:45 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:45.977457    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jm6n\" (UniqueName: \"kubernetes.io/projected/5d5247ec-619e-4bcb-82c5-1d5c0b42b685-kube-api-access-4jm6n\") pod \"coredns-66bc5c9577-29ltg\" (UID: \"5d5247ec-619e-4bcb-82c5-1d5c0b42b685\") " pod="kube-system/coredns-66bc5c9577-29ltg"
	Oct 25 09:53:45 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:45.977480    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/469fcc4c-281e-4595-aa3b-4ea853afb153-tmp\") pod \"storage-provisioner\" (UID: \"469fcc4c-281e-4595-aa3b-4ea853afb153\") " pod="kube-system/storage-provisioner"
	Oct 25 09:53:45 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:45.977514    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d5247ec-619e-4bcb-82c5-1d5c0b42b685-config-volume\") pod \"coredns-66bc5c9577-29ltg\" (UID: \"5d5247ec-619e-4bcb-82c5-1d5c0b42b685\") " pod="kube-system/coredns-66bc5c9577-29ltg"
	Oct 25 09:53:46 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:46.601191    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.601169037 podStartE2EDuration="41.601169037s" podCreationTimestamp="2025-10-25 09:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:46.600915525 +0000 UTC m=+47.382348603" watchObservedRunningTime="2025-10-25 09:53:46.601169037 +0000 UTC m=+47.382602114"
	Oct 25 09:53:46 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:46.611077    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-29ltg" podStartSLOduration=42.61105402 podStartE2EDuration="42.61105402s" podCreationTimestamp="2025-10-25 09:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:53:46.610773581 +0000 UTC m=+47.392206659" watchObservedRunningTime="2025-10-25 09:53:46.61105402 +0000 UTC m=+47.392487099"
	Oct 25 09:53:48 default-k8s-diff-port-880773 kubelet[1334]: I1025 09:53:48.596784    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcpkr\" (UniqueName: \"kubernetes.io/projected/f76f3cf0-8a0d-49fb-82e3-f5be92acdc5c-kube-api-access-tcpkr\") pod \"busybox\" (UID: \"f76f3cf0-8a0d-49fb-82e3-f5be92acdc5c\") " pod="default/busybox"
	
	
	==> storage-provisioner [eb9c116188ca0da0da82ea8ffae2ca3c4673c9ac5f1e31a712a08ed1611bf4ff] <==
	I1025 09:53:46.280216       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:53:46.287848       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:53:46.287966       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:53:46.290148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:46.295482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:53:46.295735       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:53:46.295883       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6a6b94ce-f3db-45ae-b74d-db800648c1d4", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-880773_a2ebff9c-c226-4364-8124-1a4d0dec4dfc became leader
	I1025 09:53:46.296256       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-880773_a2ebff9c-c226-4364-8124-1a4d0dec4dfc!
	W1025 09:53:46.298873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:46.304062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:53:46.397440       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-880773_a2ebff9c-c226-4364-8124-1a4d0dec4dfc!
	W1025 09:53:48.308107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:48.312625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:50.316620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:50.321007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:52.323784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:52.330281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:54.333412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:54.337727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:56.341849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:56.345884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:58.349639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:53:58.355380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-880773 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.43s)
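A note on the repeated client-go warnings in the storage-provisioner log above: they appear because the provisioner still takes its leader-election lock on a v1 Endpoints object (the k8s.io-minikube-hostpath lock), which Kubernetes deprecated in favor of discovery.k8s.io/v1 EndpointSlice and, for locks, coordination.k8s.io Leases. For reference only, a minimal client-go sketch of a Lease-based lock that avoids the warning might look like the following; the lock name mirrors the one in the log, while the identity string and kubeconfig wiring are assumptions:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lease lock in kube-system, named after the Endpoints lock seen in
		// the log; "demo-holder" is an invented identity for illustration.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "demo-holder"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
				OnStoppedLeading: func() { log.Println("lost lease") },
			},
		})
	}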

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-846915 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-846915 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (285.826763ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-846915 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
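For context, the MK_ADDON_ENABLE_PAUSED failures in this run all stem from the same probe: before enabling an addon, minikube checks whether the cluster is paused by listing runc containers, and `sudo runc list -f json` exits 1 because /run/runc does not exist on the node. A rough standalone sketch of that probe follows; the struct covers only a subset of runc's list output, and treating the error as fatal is a simplification, not minikube's actual handling:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Minimal subset of the objects `runc list -f json` prints.
	type container struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		// The same command the stderr above shows minikube running.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// With no /run/runc state directory, this is where
			// "open /run/runc: no such file or directory" surfaces.
			log.Fatalf("runc list failed: %v", err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil {
			log.Fatal(err)
		}
		for _, c := range cs {
			if c.Status == "paused" {
				fmt.Println("paused container:", c.ID)
			}
		}
	}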
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-846915 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-846915 describe deploy/metrics-server -n kube-system: exit status 1 (61.913572ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-846915 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
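For readers decoding that expectation string: the test sets the image with --images=MetricsServer=registry.k8s.io/echoserver:1.4 and the registry with --registries=MetricsServer=fake.domain, and the deployment is expected to reference the registry prefixed onto the image path. A toy sketch of that composition (composeRef is an invented name, not a minikube function):

	package main

	import "fmt"

	// composeRef joins a registry override onto an image path, producing the
	// reference the assertion above looks for; illustrative only.
	func composeRef(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		fmt.Println(composeRef("fake.domain", "registry.k8s.io/echoserver:1.4"))
		// Output: fake.domain/registry.k8s.io/echoserver:1.4
	}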
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-846915
helpers_test.go:243: (dbg) docker inspect embed-certs-846915:

-- stdout --
	[
	    {
	        "Id": "95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43",
	        "Created": "2025-10-25T09:53:45.12554821Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 441586,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:53:45.161095456Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43/hostname",
	        "HostsPath": "/var/lib/docker/containers/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43/hosts",
	        "LogPath": "/var/lib/docker/containers/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43-json.log",
	        "Name": "/embed-certs-846915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-846915:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-846915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43",
	                "LowerDir": "/var/lib/docker/overlay2/a7f2291046bf28c8b06385afeace8def42aba64bb2a48f5f68cdc889aa5b8f12-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a7f2291046bf28c8b06385afeace8def42aba64bb2a48f5f68cdc889aa5b8f12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a7f2291046bf28c8b06385afeace8def42aba64bb2a48f5f68cdc889aa5b8f12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a7f2291046bf28c8b06385afeace8def42aba64bb2a48f5f68cdc889aa5b8f12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-846915",
	                "Source": "/var/lib/docker/volumes/embed-certs-846915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-846915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-846915",
	                "name.minikube.sigs.k8s.io": "embed-certs-846915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "315dcb5e8c0d68d5927f3c0330c822ba9d06ada5496290a318365c624c584931",
	            "SandboxKey": "/var/run/docker/netns/315dcb5e8c0d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33235"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33236"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33239"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33237"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33238"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-846915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:25:9b:a7:d3:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "727501f067496090ecad65be87558162f256d8c8235dc960e3b62d2c325f512b",
	                    "EndpointID": "7f0dccfde845114b73433fd3f9a426be3bea2ae5d7c7be735de741b682390036",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-846915",
	                        "95005cf1fe64"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
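The inspect dump above is consulted mainly for the published port map (22 -> 33235, 8443 -> 33238, and so on). As a sketch of pulling just those bindings out of `docker inspect` output, with a struct limited to the fields used here (the container name is taken from this report):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Just the slice of the inspect document this report cares about.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "embed-certs-846915").Output()
		if err != nil {
			log.Fatal(err)
		}
		var results []inspect // docker inspect emits a JSON array
		if err := json.Unmarshal(out, &results); err != nil {
			log.Fatal(err)
		}
		if len(results) == 0 {
			log.Fatal("no such container")
		}
		for port, binds := range results[0].NetworkSettings.Ports {
			for _, b := range binds {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}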
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-846915 -n embed-certs-846915
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-846915 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-846915 logs -n 25: (1.019910993s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p enable-default-cni-035825                                                                                                                                                                                                                  │ enable-default-cni-035825    │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-042675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p newest-cni-042675 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-042675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-676314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p old-k8s-version-676314 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ image   │ newest-cni-042675 image list --format=json                                                                                                                                                                                                    │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ pause   │ -p newest-cni-042675 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-656799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-001549                                                                                                                                                                                                               │ disable-driver-mounts-001549 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p no-preload-656799 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-676314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-656799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-880773 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-880773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-846915 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:54:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:54:19.275788  449952 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:54:19.275916  449952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:19.275925  449952 out.go:374] Setting ErrFile to fd 2...
	I1025 09:54:19.275930  449952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:19.276131  449952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:54:19.276587  449952 out.go:368] Setting JSON to false
	I1025 09:54:19.278081  449952 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5803,"bootTime":1761380256,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:54:19.278181  449952 start.go:141] virtualization: kvm guest
	I1025 09:54:19.280051  449952 out.go:179] * [default-k8s-diff-port-880773] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:54:19.281403  449952 notify.go:220] Checking for updates...
	I1025 09:54:19.281428  449952 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:54:19.282722  449952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:54:19.283928  449952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:19.285222  449952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:54:19.286379  449952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:54:19.287745  449952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:54:19.289294  449952 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:19.289852  449952 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:54:19.314779  449952 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:54:19.314881  449952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:19.376455  449952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:54:19.36493292 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:19.376554  449952 docker.go:318] overlay module found
	I1025 09:54:19.377788  449952 out.go:179] * Using the docker driver based on existing profile
	I1025 09:54:19.378682  449952 start.go:305] selected driver: docker
	I1025 09:54:19.378698  449952 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:19.378796  449952 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:54:19.379365  449952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:19.439139  449952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:54:19.42844643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:19.439456  449952 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:19.439486  449952 cni.go:84] Creating CNI manager for ""
	I1025 09:54:19.439535  449952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:19.439596  449952 start.go:349] cluster config:
	{Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:19.441502  449952 out.go:179] * Starting "default-k8s-diff-port-880773" primary control-plane node in "default-k8s-diff-port-880773" cluster
	I1025 09:54:19.442631  449952 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:54:19.443961  449952 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:54:19.445195  449952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:19.445250  449952 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:54:19.445263  449952 cache.go:58] Caching tarball of preloaded images
	I1025 09:54:19.445295  449952 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:54:19.445383  449952 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:54:19.445399  449952 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:54:19.445551  449952 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/config.json ...
	I1025 09:54:19.469540  449952 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:54:19.469567  449952 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:54:19.469589  449952 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:54:19.469624  449952 start.go:360] acquireMachinesLock for default-k8s-diff-port-880773: {Name:mk083ef9abd9d3dbc7e696ddb5b045b01f4c2bf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:54:19.469696  449952 start.go:364] duration metric: took 50.424µs to acquireMachinesLock for "default-k8s-diff-port-880773"
	I1025 09:54:19.469720  449952 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:54:19.469728  449952 fix.go:54] fixHost starting: 
	I1025 09:54:19.470052  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:19.492315  449952 fix.go:112] recreateIfNeeded on default-k8s-diff-port-880773: state=Stopped err=<nil>
	W1025 09:54:19.492399  449952 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:54:15.475986  440020 node_ready.go:57] node "embed-certs-846915" has "Ready":"False" status (will retry)
	I1025 09:54:17.476904  440020 node_ready.go:49] node "embed-certs-846915" is "Ready"
	I1025 09:54:17.476939  440020 node_ready.go:38] duration metric: took 11.003723459s for node "embed-certs-846915" to be "Ready" ...
	I1025 09:54:17.476955  440020 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:54:17.477016  440020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:54:17.489612  440020 api_server.go:72] duration metric: took 11.446400559s to wait for apiserver process to appear ...
	I1025 09:54:17.489645  440020 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:54:17.489664  440020 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:54:17.495599  440020 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:54:17.496792  440020 api_server.go:141] control plane version: v1.34.1
	I1025 09:54:17.496826  440020 api_server.go:131] duration metric: took 7.172976ms to wait for apiserver health ...
	I1025 09:54:17.496835  440020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:54:17.500516  440020 system_pods.go:59] 8 kube-system pods found
	I1025 09:54:17.500592  440020 system_pods.go:61] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:17.500600  440020 system_pods.go:61] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.500610  440020 system_pods.go:61] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.500613  440020 system_pods.go:61] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.500617  440020 system_pods.go:61] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.500620  440020 system_pods.go:61] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.500623  440020 system_pods.go:61] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.500627  440020 system_pods.go:61] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:17.500643  440020 system_pods.go:74] duration metric: took 3.795746ms to wait for pod list to return data ...
	I1025 09:54:17.500654  440020 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:54:17.503287  440020 default_sa.go:45] found service account: "default"
	I1025 09:54:17.503309  440020 default_sa.go:55] duration metric: took 2.649102ms for default service account to be created ...
	I1025 09:54:17.503319  440020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:54:17.506326  440020 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:17.506368  440020 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:17.506374  440020 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.506380  440020 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.506390  440020 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.506397  440020 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.506400  440020 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.506405  440020 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.506410  440020 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:17.506433  440020 retry.go:31] will retry after 188.876759ms: missing components: kube-dns
	I1025 09:54:17.700456  440020 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:17.700546  440020 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:17.700558  440020 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.700568  440020 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.700582  440020 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.700588  440020 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.700593  440020 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.700599  440020 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.700612  440020 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:17.700632  440020 retry.go:31] will retry after 250.335068ms: missing components: kube-dns
	I1025 09:54:17.955256  440020 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:17.955289  440020 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Running
	I1025 09:54:17.955295  440020 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.955298  440020 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.955302  440020 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.955307  440020 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.955311  440020 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.955314  440020 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.955317  440020 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Running
	I1025 09:54:17.955324  440020 system_pods.go:126] duration metric: took 451.999845ms to wait for k8s-apps to be running ...
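The retry loop above polls the pod list until kube-dns is running. A sketch of an equivalent check with kubectl, using the k8s-app=kube-dns label that CoreDNS pods carry:

    # Block until the CoreDNS deployment has its replicas available
    kubectl -n kube-system rollout status deployment/coredns --timeout=120s
    # Or inspect the pods the retry loop was waiting on
    kubectl -n kube-system get pods -l k8s-app=kube-dns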
	I1025 09:54:17.955332  440020 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:54:17.955420  440020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:17.970053  440020 system_svc.go:56] duration metric: took 14.706919ms WaitForService to wait for kubelet
	I1025 09:54:17.970086  440020 kubeadm.go:586] duration metric: took 11.926881356s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:17.970111  440020 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:54:17.973494  440020 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:54:17.973526  440020 node_conditions.go:123] node cpu capacity is 8
	I1025 09:54:17.973543  440020 node_conditions.go:105] duration metric: took 3.426431ms to run NodePressure ...
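The NodePressure verification reads the node's capacity fields; the same values can be pulled with kubectl (node name taken from the log):

    kubectl get node embed-certs-846915 -o jsonpath='{.status.capacity}'
    # expect cpu "8" and ephemeral-storage "304681132Ki", matching the lines above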
	I1025 09:54:17.973558  440020 start.go:241] waiting for startup goroutines ...
	I1025 09:54:17.973567  440020 start.go:246] waiting for cluster config update ...
	I1025 09:54:17.973582  440020 start.go:255] writing updated cluster config ...
	I1025 09:54:17.973852  440020 ssh_runner.go:195] Run: rm -f paused
	I1025 09:54:17.978265  440020 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:17.982758  440020 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.987122  440020 pod_ready.go:94] pod "coredns-66bc5c9577-4w68k" is "Ready"
	I1025 09:54:17.987148  440020 pod_ready.go:86] duration metric: took 4.365303ms for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.989310  440020 pod_ready.go:83] waiting for pod "etcd-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.993594  440020 pod_ready.go:94] pod "etcd-embed-certs-846915" is "Ready"
	I1025 09:54:17.993619  440020 pod_ready.go:86] duration metric: took 4.284136ms for pod "etcd-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.995810  440020 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.999546  440020 pod_ready.go:94] pod "kube-apiserver-embed-certs-846915" is "Ready"
	I1025 09:54:17.999606  440020 pod_ready.go:86] duration metric: took 3.774304ms for pod "kube-apiserver-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.001621  440020 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.384665  440020 pod_ready.go:94] pod "kube-controller-manager-embed-certs-846915" is "Ready"
	I1025 09:54:18.384701  440020 pod_ready.go:86] duration metric: took 383.060784ms for pod "kube-controller-manager-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.583914  440020 pod_ready.go:83] waiting for pod "kube-proxy-kfqqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.982945  440020 pod_ready.go:94] pod "kube-proxy-kfqqh" is "Ready"
	I1025 09:54:18.982973  440020 pod_ready.go:86] duration metric: took 399.034255ms for pod "kube-proxy-kfqqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:19.184109  440020 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:19.584000  440020 pod_ready.go:94] pod "kube-scheduler-embed-certs-846915" is "Ready"
	I1025 09:54:19.584035  440020 pod_ready.go:86] duration metric: took 399.892029ms for pod "kube-scheduler-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:19.584051  440020 pod_ready.go:40] duration metric: took 1.605758265s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:19.650747  440020 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:54:19.652803  440020 out.go:179] * Done! kubectl is now configured to use "embed-certs-846915" cluster and "default" namespace by default
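Once start finishes, the kubeconfig context is already selected, so a quick sanity check from the host is just (a sketch):

    kubectl config current-context    # -> embed-certs-846915
    kubectl get pods -A               # the kube-system pods listed above should be Running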
	W1025 09:54:16.068318  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:18.567974  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:18.301621  445741 pod_ready.go:94] pod "coredns-66bc5c9577-sw9hv" is "Ready"
	I1025 09:54:18.301648  445741 pod_ready.go:86] duration metric: took 9.506322482s for pod "coredns-66bc5c9577-sw9hv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.304547  445741 pod_ready.go:83] waiting for pod "etcd-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:54:20.312171  445741 pod_ready.go:104] pod "etcd-no-preload-656799" is not "Ready", error: <nil>
	I1025 09:54:21.809723  445741 pod_ready.go:94] pod "etcd-no-preload-656799" is "Ready"
	I1025 09:54:21.809749  445741 pod_ready.go:86] duration metric: took 3.505178884s for pod "etcd-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.812231  445741 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.816695  445741 pod_ready.go:94] pod "kube-apiserver-no-preload-656799" is "Ready"
	I1025 09:54:21.816722  445741 pod_ready.go:86] duration metric: took 4.466508ms for pod "kube-apiserver-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.819011  445741 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.823589  445741 pod_ready.go:94] pod "kube-controller-manager-no-preload-656799" is "Ready"
	I1025 09:54:21.823628  445741 pod_ready.go:86] duration metric: took 4.593239ms for pod "kube-controller-manager-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.825939  445741 pod_ready.go:83] waiting for pod "kube-proxy-gfph2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.010836  445741 pod_ready.go:94] pod "kube-proxy-gfph2" is "Ready"
	I1025 09:54:22.010862  445741 pod_ready.go:86] duration metric: took 184.902324ms for pod "kube-proxy-gfph2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.210739  445741 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.608665  445741 pod_ready.go:94] pod "kube-scheduler-no-preload-656799" is "Ready"
	I1025 09:54:22.608695  445741 pod_ready.go:86] duration metric: took 397.92747ms for pod "kube-scheduler-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.608710  445741 pod_ready.go:40] duration metric: took 13.818887723s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:22.670288  445741 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:54:22.672465  445741 out.go:179] * Done! kubectl is now configured to use "no-preload-656799" cluster and "default" namespace by default
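The per-pod waits above iterate over the labels named in the log; kubectl can express the same wait declaratively (a sketch using those same labels):

    kubectl -n kube-system wait pod -l 'k8s-app in (kube-dns, kube-proxy)' \
      --for=condition=Ready --timeout=4m
    kubectl -n kube-system wait pod \
      -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' \
      --for=condition=Ready --timeout=4m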
	I1025 09:54:19.494507  449952 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-880773" ...
	I1025 09:54:19.494587  449952 cli_runner.go:164] Run: docker start default-k8s-diff-port-880773
	I1025 09:54:19.824726  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:19.851116  449952 kic.go:430] container "default-k8s-diff-port-880773" state is running.
	I1025 09:54:19.851830  449952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:54:19.874663  449952 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/config.json ...
	I1025 09:54:19.874958  449952 machine.go:93] provisionDockerMachine start ...
	I1025 09:54:19.875036  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:19.900142  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:19.900490  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:19.900509  449952 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:54:19.901160  449952 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54890->127.0.0.1:33250: read: connection reset by peer
	I1025 09:54:23.064068  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-880773
	
	I1025 09:54:23.064110  449952 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-880773"
	I1025 09:54:23.064192  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.086772  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:23.087065  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:23.087087  449952 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-880773 && echo "default-k8s-diff-port-880773" | sudo tee /etc/hostname
	I1025 09:54:23.252426  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-880773
	
	I1025 09:54:23.252521  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.273044  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:23.273316  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:23.273335  449952 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-880773' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-880773/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-880773' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:54:23.424572  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
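Since the docker driver runs the node as a container, both the hostname change and the /etc/hosts entry written above can be verified directly from the host (a sketch):

    docker exec default-k8s-diff-port-880773 hostname
    docker exec default-k8s-diff-port-880773 grep default-k8s-diff-port-880773 /etc/hosts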
	I1025 09:54:23.424603  449952 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:54:23.424629  449952 ubuntu.go:190] setting up certificates
	I1025 09:54:23.424642  449952 provision.go:84] configureAuth start
	I1025 09:54:23.424716  449952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:54:23.447850  449952 provision.go:143] copyHostCerts
	I1025 09:54:23.447922  449952 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:54:23.447939  449952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:54:23.448010  449952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:54:23.448121  449952 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:54:23.448133  449952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:54:23.448172  449952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:54:23.448307  449952 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:54:23.448322  449952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:54:23.448386  449952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:54:23.448466  449952 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-880773 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-880773 localhost minikube]
	I1025 09:54:23.670392  449952 provision.go:177] copyRemoteCerts
	I1025 09:54:23.670473  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:54:23.670534  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.695861  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:23.810003  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:54:23.831919  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 09:54:23.855020  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:54:23.876651  449952 provision.go:87] duration metric: took 451.986685ms to configureAuth
	I1025 09:54:23.876682  449952 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:54:23.876901  449952 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:23.877015  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.898381  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:23.898653  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:23.898684  449952 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1025 09:54:20.568510  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:22.569444  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:25.068911  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:24.748214  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:54:24.748254  449952 machine.go:96] duration metric: took 4.873275374s to provisionDockerMachine
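The container-runtime step above wrote an options file for cri-o and restarted the service; the file can be read back in place (a sketch; the path and contents are exactly those shown in the log, and the assumption is that the crio unit picks the file up as an environment file):

    docker exec default-k8s-diff-port-880773 cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '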
	I1025 09:54:24.748278  449952 start.go:293] postStartSetup for "default-k8s-diff-port-880773" (driver="docker")
	I1025 09:54:24.748293  449952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:54:24.748387  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:54:24.748520  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:24.768940  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:24.873795  449952 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:54:24.877543  449952 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:54:24.877575  449952 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:54:24.877589  449952 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:54:24.877661  449952 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:54:24.877782  449952 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:54:24.877958  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:54:24.887735  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:24.906567  449952 start.go:296] duration metric: took 158.269737ms for postStartSetup
	I1025 09:54:24.906638  449952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:54:24.906671  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:24.925060  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:25.024684  449952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:54:25.029312  449952 fix.go:56] duration metric: took 5.559580439s for fixHost
	I1025 09:54:25.029335  449952 start.go:83] releasing machines lock for "default-k8s-diff-port-880773", held for 5.559626356s
	I1025 09:54:25.029412  449952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:54:25.053651  449952 ssh_runner.go:195] Run: cat /version.json
	I1025 09:54:25.053671  449952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:54:25.053710  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:25.053740  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:25.076792  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:25.077574  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:25.177839  449952 ssh_runner.go:195] Run: systemctl --version
	I1025 09:54:25.232420  449952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:54:25.269857  449952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:54:25.274931  449952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:54:25.275022  449952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:54:25.283809  449952 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:54:25.283844  449952 start.go:495] detecting cgroup driver to use...
	I1025 09:54:25.283873  449952 detect.go:190] detected "systemd" cgroup driver on host os
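One way to confirm the detected cgroup driver by hand (a sketch; not necessarily how detect.go decides):

    stat -fc %T /sys/fs/cgroup    # "cgroup2fs" on a cgroup v2 host
    ps -p 1 -o comm=              # "systemd" when systemd is PID 1, the usual systemd-driver case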
	I1025 09:54:25.283907  449952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:54:25.298715  449952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:54:25.311114  449952 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:54:25.311179  449952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:54:25.326245  449952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:54:25.338983  449952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:54:25.421886  449952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:54:25.507785  449952 docker.go:234] disabling docker service ...
	I1025 09:54:25.507851  449952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:54:25.522758  449952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:54:25.535545  449952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:54:25.624987  449952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:54:25.708591  449952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:54:25.721462  449952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:54:25.736203  449952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:54:25.736286  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.745513  449952 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:54:25.745572  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.754426  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.763537  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.772424  449952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:54:25.780767  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.789663  449952 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.798468  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.807406  449952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:54:25.815004  449952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:54:25.822998  449952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:25.903676  449952 ssh_runner.go:195] Run: sudo systemctl restart crio
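Taken together, the sed edits above pin the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf. After the restart this can be spot-checked on the node with crio itself (the log also runs `crio config` later; its output should reflect the drop-in):

    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"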
	I1025 09:54:26.020906  449952 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:54:26.020973  449952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:54:26.025150  449952 start.go:563] Will wait 60s for crictl version
	I1025 09:54:26.025208  449952 ssh_runner.go:195] Run: which crictl
	I1025 09:54:26.029013  449952 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:54:26.057753  449952 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:54:26.057819  449952 ssh_runner.go:195] Run: crio --version
	I1025 09:54:26.086687  449952 ssh_runner.go:195] Run: crio --version
	I1025 09:54:26.116337  449952 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:54:26.117443  449952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-880773 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:54:26.135714  449952 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 09:54:26.140427  449952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
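The one-liner above is dense; unrolled, it drops any stale host.minikube.internal mapping, appends a fresh one pointing at the network gateway, and copies the result back over /etc/hosts (sudo is needed only for the final cp):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.94.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts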
	I1025 09:54:26.154403  449952 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:54:26.154570  449952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:26.154635  449952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:26.192928  449952 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:26.192961  449952 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:54:26.193024  449952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:26.221578  449952 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:26.221602  449952 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:54:26.221611  449952 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1025 09:54:26.221708  449952 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-880773 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:54:26.221767  449952 ssh_runner.go:195] Run: crio config
	I1025 09:54:26.266519  449952 cni.go:84] Creating CNI manager for ""
	I1025 09:54:26.266551  449952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:26.266577  449952 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:54:26.266705  449952 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-880773 NodeName:default-k8s-diff-port-880773 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:54:26.266942  449952 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-880773"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:54:26.267030  449952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:54:26.276099  449952 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:54:26.276158  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:54:26.283856  449952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 09:54:26.296736  449952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:54:26.309600  449952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
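The rendered kubeadm config shown above is staged as kubeadm.yaml.new before the cluster is restarted; it can be read back from the node (a sketch):

    minikube -p default-k8s-diff-port-880773 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"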
	I1025 09:54:26.322267  449952 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:54:26.325950  449952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:54:26.336085  449952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:26.418603  449952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:26.445329  449952 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773 for IP: 192.168.94.2
	I1025 09:54:26.445370  449952 certs.go:195] generating shared ca certs ...
	I1025 09:54:26.445391  449952 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:26.445589  449952 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:54:26.445651  449952 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:54:26.445663  449952 certs.go:257] generating profile certs ...
	I1025 09:54:26.445763  449952 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/client.key
	I1025 09:54:26.445836  449952 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key.bf049977
	I1025 09:54:26.445889  449952 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.key
	I1025 09:54:26.446021  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:54:26.446059  449952 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:54:26.446071  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:54:26.446100  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:54:26.446130  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:54:26.446159  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:54:26.446208  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:26.447082  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:54:26.467801  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:54:26.487512  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:54:26.507419  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:54:26.531864  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 09:54:26.550342  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:54:26.569273  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:54:26.587593  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:54:26.605286  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:54:26.623801  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:54:26.642803  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:54:26.660752  449952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:54:26.674006  449952 ssh_runner.go:195] Run: openssl version
	I1025 09:54:26.680368  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:54:26.689226  449952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:26.693134  449952 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:26.693180  449952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:26.728010  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:54:26.736810  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:54:26.746043  449952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:54:26.749893  449952 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:54:26.749943  449952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:54:26.785153  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:54:26.794063  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:54:26.802929  449952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:54:26.807038  449952 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:54:26.807101  449952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:54:26.844046  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:54:26.852738  449952 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:54:26.856516  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:54:26.892058  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:54:26.928987  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:54:26.978149  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:54:27.021912  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:54:27.075255  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
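The openssl runs above are 24-hour expiry checks: -checkend N exits non-zero if the certificate expires within N seconds, so 86400 asks whether the cert survives the next day. By hand, on the node:

    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "valid for 24h+"
    # Print the actual expiry instead:
    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt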
	I1025 09:54:27.132302  449952 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:27.132461  449952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:54:27.132541  449952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:54:27.166099  449952 cri.go:89] found id: "8a40c304121945c99334f375a4fc8f1073390b82cca6a44c6e2b224a5804ed43"
	I1025 09:54:27.166122  449952 cri.go:89] found id: "1099e940dc59e4a7fc6edf4f82c427fc4633cbc73d1759f0ef430fccd002219f"
	I1025 09:54:27.166131  449952 cri.go:89] found id: "b7360eb6624b8284557553c607130a8087e3690512dcc9caea4351f9f876fd02"
	I1025 09:54:27.166136  449952 cri.go:89] found id: "9a7e2aef555d4452a0b73ff6d39e556aaf40affe43c7adcaf8fc119b3910c298"
	I1025 09:54:27.166141  449952 cri.go:89] found id: ""
	I1025 09:54:27.166194  449952 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:54:27.179061  449952 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:27Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:27.179160  449952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:54:27.188157  449952 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:54:27.188180  449952 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:54:27.188228  449952 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:54:27.196153  449952 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:54:27.197499  449952 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-880773" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:27.198480  449952 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-130604/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-880773" cluster setting kubeconfig missing "default-k8s-diff-port-880773" context setting]
	I1025 09:54:27.199935  449952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:27.202256  449952 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:54:27.210782  449952 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1025 09:54:27.210819  449952 kubeadm.go:601] duration metric: took 22.632727ms to restartPrimaryControlPlane
	I1025 09:54:27.210865  449952 kubeadm.go:402] duration metric: took 78.655845ms to StartCluster
	I1025 09:54:27.210883  449952 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:27.210942  449952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:27.213436  449952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:27.213678  449952 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:54:27.213737  449952 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:54:27.213844  449952 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-880773"
	I1025 09:54:27.213859  449952 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-880773"
	I1025 09:54:27.213875  449952 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-880773"
	I1025 09:54:27.213886  449952 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-880773"
	I1025 09:54:27.213891  449952 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-880773"
	W1025 09:54:27.213898  449952 addons.go:247] addon dashboard should already be in state true
	I1025 09:54:27.213936  449952 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:54:27.213939  449952 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:27.213866  449952 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-880773"
	W1025 09:54:27.214066  449952 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:54:27.214095  449952 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:54:27.214261  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.214456  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.214610  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.216018  449952 out.go:179] * Verifying Kubernetes components...
	I1025 09:54:27.217234  449952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:27.239708  449952 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-880773"
	W1025 09:54:27.239738  449952 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:54:27.239770  449952 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:54:27.240253  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.242481  449952 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:54:27.242489  449952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:54:27.243627  449952 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:27.243645  449952 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:54:27.243651  449952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:54:27.243712  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:27.247468  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:54:27.247486  449952 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:54:27.247539  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:27.267591  449952 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:27.267622  449952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:54:27.267686  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:27.276575  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:27.285081  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:27.298498  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:27.368890  449952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:27.383755  449952 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-880773" to be "Ready" ...
	I1025 09:54:27.395977  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:54:27.396003  449952 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:54:27.406130  449952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:27.411552  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:54:27.411662  449952 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:54:27.419928  449952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:27.427159  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:54:27.427182  449952 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:54:27.446072  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:54:27.446100  449952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:54:27.471003  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:54:27.471033  449952 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:54:27.488999  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:54:27.489025  449952 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:54:27.503088  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:54:27.503113  449952 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:54:27.517184  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:54:27.517212  449952 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:54:27.530517  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:27.530540  449952 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:54:27.545962  449952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:29.018628  449952 node_ready.go:49] node "default-k8s-diff-port-880773" is "Ready"
	I1025 09:54:29.018668  449952 node_ready.go:38] duration metric: took 1.634880084s for node "default-k8s-diff-port-880773" to be "Ready" ...
	I1025 09:54:29.018686  449952 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:54:29.018740  449952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:54:29.506034  449952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.099869063s)
	I1025 09:54:29.506102  449952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.086134972s)
	I1025 09:54:29.506180  449952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.960181276s)
	I1025 09:54:29.506238  449952 api_server.go:72] duration metric: took 2.292529535s to wait for apiserver process to appear ...
	I1025 09:54:29.506289  449952 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:54:29.506306  449952 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1025 09:54:29.507716  449952 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-880773 addons enable metrics-server
	
	I1025 09:54:29.513028  449952 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:54:29.513055  449952 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:54:29.514792  449952 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1025 09:54:27.071249  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:29.568141  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
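	The 500s above come from two poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) that had not finished yet; minikube simply retries until healthz returns 200. A minimal sketch for pulling the same per-check breakdown by hand, assuming the profile's kubeconfig context exists on the host (minikube names the context after the profile):
	
	  # Authenticated GET against the apiserver; ?verbose lists each check as in the log.
	  kubectl --context default-k8s-diff-port-880773 get --raw '/healthz?verbose'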
	
	
	==> CRI-O <==
	Oct 25 09:54:17 embed-certs-846915 crio[773]: time="2025-10-25T09:54:17.620022009Z" level=info msg="Starting container: 25d8f7439c371c1848b9ac24c0040a9d820379019372e23be137ccbf1cd1a25f" id=e0b08f85-7a54-4143-9503-5f0f68b29ec7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:17 embed-certs-846915 crio[773]: time="2025-10-25T09:54:17.621766801Z" level=info msg="Started container" PID=1838 containerID=25d8f7439c371c1848b9ac24c0040a9d820379019372e23be137ccbf1cd1a25f description=kube-system/coredns-66bc5c9577-4w68k/coredns id=e0b08f85-7a54-4143-9503-5f0f68b29ec7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9d343a1a2921611012b154f2ee657b210425f06fc24a4e0a9b14c08390884c09
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.14382628Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3adc5ba6-2969-4573-bdb8-a22b7e859083 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.143934814Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.150667024Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7b1f62fb21baf3ead89a2904772709e862048edd2bd3aea0f084e57bc9611d19 UID:7deecc20-1509-4c22-90d3-ebbe7e9e363f NetNS:/var/run/netns/9538d5b9-acf7-492b-bd56-0240807d7bef Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005182b8}] Aliases:map[]}"
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.150695821Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.171153627Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7b1f62fb21baf3ead89a2904772709e862048edd2bd3aea0f084e57bc9611d19 UID:7deecc20-1509-4c22-90d3-ebbe7e9e363f NetNS:/var/run/netns/9538d5b9-acf7-492b-bd56-0240807d7bef Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005182b8}] Aliases:map[]}"
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.171518621Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.173308114Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.174420066Z" level=info msg="Ran pod sandbox 7b1f62fb21baf3ead89a2904772709e862048edd2bd3aea0f084e57bc9611d19 with infra container: default/busybox/POD" id=3adc5ba6-2969-4573-bdb8-a22b7e859083 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.176189149Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b9077185-fce4-4c27-816e-d830eeb7ab44 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.176368021Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b9077185-fce4-4c27-816e-d830eeb7ab44 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.176415129Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b9077185-fce4-4c27-816e-d830eeb7ab44 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.177308852Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f3d840bf-2ecf-4d32-881f-8bebc41718a3 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:54:20 embed-certs-846915 crio[773]: time="2025-10-25T09:54:20.179339815Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:54:22 embed-certs-846915 crio[773]: time="2025-10-25T09:54:22.315132625Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f3d840bf-2ecf-4d32-881f-8bebc41718a3 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:54:22 embed-certs-846915 crio[773]: time="2025-10-25T09:54:22.316053083Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eed8d6c3-1ef3-440c-a3f7-5a238e368511 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:22 embed-certs-846915 crio[773]: time="2025-10-25T09:54:22.318006741Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e38e96ce-8042-4dc6-82c1-f594ca29dbc5 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:22 embed-certs-846915 crio[773]: time="2025-10-25T09:54:22.325214803Z" level=info msg="Creating container: default/busybox/busybox" id=91c3aedd-a150-463e-bbea-25efeaff3e69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:22 embed-certs-846915 crio[773]: time="2025-10-25T09:54:22.325369691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:22 embed-certs-846915 crio[773]: time="2025-10-25T09:54:22.330241277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:22 embed-certs-846915 crio[773]: time="2025-10-25T09:54:22.33079637Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:22 embed-certs-846915 crio[773]: time="2025-10-25T09:54:22.359056164Z" level=info msg="Created container 9d954204c5edf14f9bf6a623439c2a782900188887cb855f0fd80a8826127c33: default/busybox/busybox" id=91c3aedd-a150-463e-bbea-25efeaff3e69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:22 embed-certs-846915 crio[773]: time="2025-10-25T09:54:22.359874782Z" level=info msg="Starting container: 9d954204c5edf14f9bf6a623439c2a782900188887cb855f0fd80a8826127c33" id=b60e0a4e-0723-4ba9-a84b-0c8d0cf78821 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:22 embed-certs-846915 crio[773]: time="2025-10-25T09:54:22.362593109Z" level=info msg="Started container" PID=1915 containerID=9d954204c5edf14f9bf6a623439c2a782900188887cb855f0fd80a8826127c33 description=default/busybox/busybox id=b60e0a4e-0723-4ba9-a84b-0c8d0cf78821 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b1f62fb21baf3ead89a2904772709e862048edd2bd3aea0f084e57bc9611d19
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	9d954204c5edf       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   7b1f62fb21baf       busybox                                      default
	25d8f7439c371       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   9d343a1a29216       coredns-66bc5c9577-4w68k                     kube-system
	9c87582b37d9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   ab131c8b6f368       storage-provisioner                          kube-system
	d014697929358       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   963b0b65f9d3b       kube-proxy-kfqqh                             kube-system
	24ea785769d2a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   882ee8120f3d7       kindnet-khx5l                                kube-system
	4e531ba565086       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   fa552ccc022ec       etcd-embed-certs-846915                      kube-system
	926990adfc077       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   b2370d77861e3       kube-controller-manager-embed-certs-846915   kube-system
	23a5495124310       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   f538f3cc6868d       kube-apiserver-embed-certs-846915            kube-system
	53793773b0a7b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   3beb5a85a08cb       kube-scheduler-embed-certs-846915            kube-system
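	The table above is the CRI-level view of the node. A sketch for querying it directly, assuming SSH access through the profile (crictl is available inside the minikube node):
	
	  # List all CRI-O containers (running and exited), matching the columns above.
	  minikube -p embed-certs-846915 ssh -- sudo crictl ps -a
	  # Fetch a container's logs by an unambiguous ID prefix from the first column.
	  minikube -p embed-certs-846915 ssh -- sudo crictl logs 25d8f7439c371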
	
	
	==> coredns [25d8f7439c371c1848b9ac24c0040a9d820379019372e23be137ccbf1cd1a25f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57537 - 19623 "HINFO IN 8766734528305641195.3925368578378104957. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020916247s
	
	
	==> describe nodes <==
	Name:               embed-certs-846915
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-846915
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=embed-certs-846915
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_54_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:53:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-846915
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:54:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:54:17 +0000   Sat, 25 Oct 2025 09:53:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:54:17 +0000   Sat, 25 Oct 2025 09:53:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:54:17 +0000   Sat, 25 Oct 2025 09:53:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:54:17 +0000   Sat, 25 Oct 2025 09:54:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-846915
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                7759893b-5ad2-4235-8596-bf7be856684a
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-4w68k                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-846915                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-khx5l                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-846915             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-846915    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-kfqqh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-846915             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node embed-certs-846915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node embed-certs-846915 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node embed-certs-846915 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node embed-certs-846915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node embed-certs-846915 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node embed-certs-846915 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node embed-certs-846915 event: Registered Node embed-certs-846915 in Controller
	  Normal  NodeReady                14s                kubelet          Node embed-certs-846915 status is now: NodeReady
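	This section mirrors kubectl's node description; to re-check the same conditions and events against the live cluster (assuming the embed-certs-846915 context is present in the kubeconfig):
	
	  kubectl --context embed-certs-846915 describe node embed-certs-846915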
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	
	
	==> etcd [4e531ba565086bf54a3d5d33e7bdcd60e090bf647452e953dea416b68ba2a06e] <==
	{"level":"warn","ts":"2025-10-25T09:53:57.324317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.330482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.337601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.347426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.355021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.362710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.370459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.376986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.384403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.391642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.398671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.408058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.416770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.430712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.437337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.454472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.462238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.471078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.480176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.489938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.496217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.511781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.521520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.530178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:53:57.585994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39274","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:54:31 up  1:36,  0 user,  load average: 5.78, 4.59, 2.80
	Linux embed-certs-846915 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [24ea785769d2aed6170823266cba3c6100ddae3ac04c63c9fb55355fff7c4eb7] <==
	I1025 09:54:06.850639       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:54:06.850982       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 09:54:06.851148       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:54:06.851169       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:54:06.851191       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:54:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:54:07.149371       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:54:07.149409       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:54:07.149422       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:54:07.149678       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:54:07.449971       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:54:07.450104       1 metrics.go:72] Registering metrics
	I1025 09:54:07.450268       1 controller.go:711] "Syncing nftables rules"
	I1025 09:54:17.149991       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:54:17.150057       1 main.go:301] handling current node
	I1025 09:54:27.151514       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:54:27.151565       1 main.go:301] handling current node
	
	
	==> kube-apiserver [23a549512431047cbbd6fb689d3704b87afd8d70bae447a31e1d7232963586cf] <==
	I1025 09:53:58.307779       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:53:58.318426       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:53:58.318499       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:53:58.323921       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:53:58.324099       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:53:58.324126       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:53:58.421703       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:53:59.113737       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:53:59.119533       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:53:59.119557       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:53:59.649038       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:53:59.689060       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:53:59.815026       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:53:59.820767       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1025 09:53:59.821979       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:53:59.826332       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:54:00.243654       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:54:00.991229       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:54:01.000782       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:54:01.009266       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:54:05.698526       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:54:05.705492       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:54:06.200941       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1025 09:54:06.269602       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1025 09:54:29.957495       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:36224: use of closed network connection
	
	
	==> kube-controller-manager [926990adfc077a02c92075cf75b66698d9625e91df652f4b73440979570c5a25] <==
	I1025 09:54:05.242422       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:54:05.242892       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:54:05.242917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:54:05.242968       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:54:05.243044       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:54:05.243051       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:54:05.243197       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:54:05.243208       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:54:05.243950       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:54:05.244073       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:54:05.244281       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:54:05.244315       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:54:05.244329       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:54:05.244342       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:54:05.244404       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:54:05.246910       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:54:05.251481       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:54:05.262685       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:54:05.275135       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:54:05.275281       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:54:05.275429       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-846915"
	I1025 09:54:05.275496       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:54:05.293769       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:54:06.524626       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1025 09:54:20.277468       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d01469792935897d45f97b948dba435c72875278b32f5cf8ea2544bf2cf1c8dd] <==
	I1025 09:54:06.682919       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:54:06.755885       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:54:06.856720       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:54:06.856789       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 09:54:06.856882       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:54:06.884949       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:54:06.885011       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:54:06.891011       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:54:06.891967       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:54:06.892013       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:54:06.895160       1 config.go:200] "Starting service config controller"
	I1025 09:54:06.895181       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:54:06.895229       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:54:06.895234       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:54:06.895249       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:54:06.895255       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:54:06.895528       1 config.go:309] "Starting node config controller"
	I1025 09:54:06.895540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:54:06.895547       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:54:06.996250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:54:06.996275       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:54:06.996288       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
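	kube-proxy is running in iptables mode here, so service routing materializes as chains in the nat table. A sketch for inspecting them on the node, assuming SSH access through the profile:
	
	  # KUBE-SERVICES is the entry chain kube-proxy installs for ClusterIP/NodePort routing.
	  minikube -p embed-certs-846915 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n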
	
	
	==> kube-scheduler [53793773b0a7b95125797c1e1cc4b6084af5c074db5cbf5478f4c408da5ec157] <==
	E1025 09:53:58.197395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:53:58.197554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:53:58.197626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:53:58.197717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:53:58.197766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:53:58.197789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:53:58.198044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:53:58.198056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:53:58.198092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:53:59.034666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:53:59.134812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:53:59.145020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:53:59.194870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:53:59.230021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:53:59.290388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:53:59.292429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:53:59.302196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:53:59.356668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:53:59.365840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:53:59.374147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:53:59.402621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:53:59.441984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:53:59.441993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:53:59.634366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 09:54:02.192318       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:54:01 embed-certs-846915 kubelet[1324]: E1025 09:54:01.839284    1324 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-846915\" already exists" pod="kube-system/etcd-embed-certs-846915"
	Oct 25 09:54:01 embed-certs-846915 kubelet[1324]: I1025 09:54:01.866205    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-846915" podStartSLOduration=1.866182502 podStartE2EDuration="1.866182502s" podCreationTimestamp="2025-10-25 09:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:54:01.854023417 +0000 UTC m=+1.126273294" watchObservedRunningTime="2025-10-25 09:54:01.866182502 +0000 UTC m=+1.138432376"
	Oct 25 09:54:01 embed-certs-846915 kubelet[1324]: I1025 09:54:01.877552    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-846915" podStartSLOduration=1.877525707 podStartE2EDuration="1.877525707s" podCreationTimestamp="2025-10-25 09:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:54:01.866111419 +0000 UTC m=+1.138361320" watchObservedRunningTime="2025-10-25 09:54:01.877525707 +0000 UTC m=+1.149775585"
	Oct 25 09:54:01 embed-certs-846915 kubelet[1324]: I1025 09:54:01.890066    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-846915" podStartSLOduration=1.890044569 podStartE2EDuration="1.890044569s" podCreationTimestamp="2025-10-25 09:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:54:01.87772041 +0000 UTC m=+1.149970278" watchObservedRunningTime="2025-10-25 09:54:01.890044569 +0000 UTC m=+1.162294446"
	Oct 25 09:54:01 embed-certs-846915 kubelet[1324]: I1025 09:54:01.905307    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-846915" podStartSLOduration=1.905279893 podStartE2EDuration="1.905279893s" podCreationTimestamp="2025-10-25 09:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:54:01.890251348 +0000 UTC m=+1.162501226" watchObservedRunningTime="2025-10-25 09:54:01.905279893 +0000 UTC m=+1.177529771"
	Oct 25 09:54:05 embed-certs-846915 kubelet[1324]: I1025 09:54:05.237197    1324 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 09:54:05 embed-certs-846915 kubelet[1324]: I1025 09:54:05.238012    1324 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:54:06 embed-certs-846915 kubelet[1324]: I1025 09:54:06.328866    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/333a7d45-8903-4f7d-a7be-87cb28de77fa-cni-cfg\") pod \"kindnet-khx5l\" (UID: \"333a7d45-8903-4f7d-a7be-87cb28de77fa\") " pod="kube-system/kindnet-khx5l"
	Oct 25 09:54:06 embed-certs-846915 kubelet[1324]: I1025 09:54:06.328921    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9qf9\" (UniqueName: \"kubernetes.io/projected/333a7d45-8903-4f7d-a7be-87cb28de77fa-kube-api-access-w9qf9\") pod \"kindnet-khx5l\" (UID: \"333a7d45-8903-4f7d-a7be-87cb28de77fa\") " pod="kube-system/kindnet-khx5l"
	Oct 25 09:54:06 embed-certs-846915 kubelet[1324]: I1025 09:54:06.328954    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ff535da-325f-4c85-a30a-d044753b2895-kube-proxy\") pod \"kube-proxy-kfqqh\" (UID: \"1ff535da-325f-4c85-a30a-d044753b2895\") " pod="kube-system/kube-proxy-kfqqh"
	Oct 25 09:54:06 embed-certs-846915 kubelet[1324]: I1025 09:54:06.328986    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ff535da-325f-4c85-a30a-d044753b2895-lib-modules\") pod \"kube-proxy-kfqqh\" (UID: \"1ff535da-325f-4c85-a30a-d044753b2895\") " pod="kube-system/kube-proxy-kfqqh"
	Oct 25 09:54:06 embed-certs-846915 kubelet[1324]: I1025 09:54:06.329010    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvq69\" (UniqueName: \"kubernetes.io/projected/1ff535da-325f-4c85-a30a-d044753b2895-kube-api-access-nvq69\") pod \"kube-proxy-kfqqh\" (UID: \"1ff535da-325f-4c85-a30a-d044753b2895\") " pod="kube-system/kube-proxy-kfqqh"
	Oct 25 09:54:06 embed-certs-846915 kubelet[1324]: I1025 09:54:06.329033    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/333a7d45-8903-4f7d-a7be-87cb28de77fa-lib-modules\") pod \"kindnet-khx5l\" (UID: \"333a7d45-8903-4f7d-a7be-87cb28de77fa\") " pod="kube-system/kindnet-khx5l"
	Oct 25 09:54:06 embed-certs-846915 kubelet[1324]: I1025 09:54:06.329052    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ff535da-325f-4c85-a30a-d044753b2895-xtables-lock\") pod \"kube-proxy-kfqqh\" (UID: \"1ff535da-325f-4c85-a30a-d044753b2895\") " pod="kube-system/kube-proxy-kfqqh"
	Oct 25 09:54:06 embed-certs-846915 kubelet[1324]: I1025 09:54:06.329074    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/333a7d45-8903-4f7d-a7be-87cb28de77fa-xtables-lock\") pod \"kindnet-khx5l\" (UID: \"333a7d45-8903-4f7d-a7be-87cb28de77fa\") " pod="kube-system/kindnet-khx5l"
	Oct 25 09:54:06 embed-certs-846915 kubelet[1324]: I1025 09:54:06.872963    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-khx5l" podStartSLOduration=0.872929053 podStartE2EDuration="872.929053ms" podCreationTimestamp="2025-10-25 09:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:54:06.872869435 +0000 UTC m=+6.145119312" watchObservedRunningTime="2025-10-25 09:54:06.872929053 +0000 UTC m=+6.145178931"
	Oct 25 09:54:06 embed-certs-846915 kubelet[1324]: I1025 09:54:06.873151    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kfqqh" podStartSLOduration=0.873139741 podStartE2EDuration="873.139741ms" podCreationTimestamp="2025-10-25 09:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:54:06.857340984 +0000 UTC m=+6.129590876" watchObservedRunningTime="2025-10-25 09:54:06.873139741 +0000 UTC m=+6.145389618"
	Oct 25 09:54:17 embed-certs-846915 kubelet[1324]: I1025 09:54:17.240636    1324 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:54:17 embed-certs-846915 kubelet[1324]: I1025 09:54:17.307118    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6743bff9-71c8-4295-960d-62d0c277c109-config-volume\") pod \"coredns-66bc5c9577-4w68k\" (UID: \"6743bff9-71c8-4295-960d-62d0c277c109\") " pod="kube-system/coredns-66bc5c9577-4w68k"
	Oct 25 09:54:17 embed-certs-846915 kubelet[1324]: I1025 09:54:17.307164    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8-tmp\") pod \"storage-provisioner\" (UID: \"fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8\") " pod="kube-system/storage-provisioner"
	Oct 25 09:54:17 embed-certs-846915 kubelet[1324]: I1025 09:54:17.307189    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfn7g\" (UniqueName: \"kubernetes.io/projected/6743bff9-71c8-4295-960d-62d0c277c109-kube-api-access-kfn7g\") pod \"coredns-66bc5c9577-4w68k\" (UID: \"6743bff9-71c8-4295-960d-62d0c277c109\") " pod="kube-system/coredns-66bc5c9577-4w68k"
	Oct 25 09:54:17 embed-certs-846915 kubelet[1324]: I1025 09:54:17.307245    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcg7d\" (UniqueName: \"kubernetes.io/projected/fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8-kube-api-access-dcg7d\") pod \"storage-provisioner\" (UID: \"fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8\") " pod="kube-system/storage-provisioner"
	Oct 25 09:54:17 embed-certs-846915 kubelet[1324]: I1025 09:54:17.886264    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.886240857 podStartE2EDuration="11.886240857s" podCreationTimestamp="2025-10-25 09:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:54:17.88590944 +0000 UTC m=+17.158159317" watchObservedRunningTime="2025-10-25 09:54:17.886240857 +0000 UTC m=+17.158490734"
	Oct 25 09:54:19 embed-certs-846915 kubelet[1324]: I1025 09:54:19.836115    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4w68k" podStartSLOduration=13.836088663 podStartE2EDuration="13.836088663s" podCreationTimestamp="2025-10-25 09:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:54:17.898480449 +0000 UTC m=+17.170730326" watchObservedRunningTime="2025-10-25 09:54:19.836088663 +0000 UTC m=+19.108338542"
	Oct 25 09:54:19 embed-certs-846915 kubelet[1324]: I1025 09:54:19.925240    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjvgx\" (UniqueName: \"kubernetes.io/projected/7deecc20-1509-4c22-90d3-ebbe7e9e363f-kube-api-access-fjvgx\") pod \"busybox\" (UID: \"7deecc20-1509-4c22-90d3-ebbe7e9e363f\") " pod="default/busybox"
	
	
	==> storage-provisioner [9c87582b37d9af1c10dde892bf1861770f5b9e8a0cf49f9682dcc3e787d84e43] <==
	I1025 09:54:17.629711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:54:17.638103       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:54:17.638433       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:54:17.641806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:17.648587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:54:17.648765       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:54:17.648965       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-846915_37643b44-9a35-4731-877d-f348590c2e43!
	I1025 09:54:17.648969       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"29bb7dfc-96d0-4f89-994b-0b96c89c26b8", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-846915_37643b44-9a35-4731-877d-f348590c2e43 became leader
	W1025 09:54:17.651573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:17.654506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:54:17.749590       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-846915_37643b44-9a35-4731-877d-f348590c2e43!
	W1025 09:54:19.657828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:19.663170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:21.667159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:21.671952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:23.676308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:23.688106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:25.691875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:25.697116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:27.700016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:27.704309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:29.707822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:54:29.711745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-846915 -n embed-certs-846915
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-846915 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.16s)
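Note on the post-mortem above: the repeated storage-provisioner warnings are emitted on every leader-election renewal, because the provisioner still uses a legacy v1 Endpoints object (kube-system/k8s.io-minikube-hostpath) as its lock, and the kube-scheduler "Failed to watch" errors are the usual startup race before its RBAC caches sync (the final "Caches are synced" line shows it recovered). A minimal sketch for confirming both against the live profile, assuming the embed-certs-846915 kubeconfig context from the test is still reachable (object names are taken from the log; the --as impersonation requires a kubeconfig user allowed to impersonate):

	# the legacy Endpoints object the provisioner holds as its leader-election lock
	kubectl --context embed-certs-846915 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# the discovery.k8s.io/v1 EndpointSlice API the deprecation warning points to
	kubectl --context embed-certs-846915 -n kube-system get endpointslices.discovery.k8s.io
	# verify the scheduler can now watch the resources it was briefly denied at startup
	kubectl --context embed-certs-846915 auth can-i list poddisruptionbudgets --as=system:kube-scheduler --all-namespaces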

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-656799 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-656799 --alsologtostderr -v=1: exit status 80 (1.830308674s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-656799 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:54:34.546642  453548 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:54:34.546768  453548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:34.546781  453548 out.go:374] Setting ErrFile to fd 2...
	I1025 09:54:34.546786  453548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:34.547126  453548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:54:34.547469  453548 out.go:368] Setting JSON to false
	I1025 09:54:34.547527  453548 mustload.go:65] Loading cluster: no-preload-656799
	I1025 09:54:34.548046  453548 config.go:182] Loaded profile config "no-preload-656799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:34.548648  453548 cli_runner.go:164] Run: docker container inspect no-preload-656799 --format={{.State.Status}}
	I1025 09:54:34.567145  453548 host.go:66] Checking if "no-preload-656799" exists ...
	I1025 09:54:34.567430  453548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:34.628673  453548 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:88 OomKillDisable:false NGoroutines:94 SystemTime:2025-10-25 09:54:34.618436678 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:34.629313  453548 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-656799 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:54:34.631227  453548 out.go:179] * Pausing node no-preload-656799 ... 
	I1025 09:54:34.632824  453548 host.go:66] Checking if "no-preload-656799" exists ...
	I1025 09:54:34.633069  453548 ssh_runner.go:195] Run: systemctl --version
	I1025 09:54:34.633103  453548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-656799
	I1025 09:54:34.653657  453548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33245 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/no-preload-656799/id_rsa Username:docker}
	I1025 09:54:34.762485  453548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:34.778795  453548 pause.go:52] kubelet running: true
	I1025 09:54:34.778860  453548 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:54:35.004436  453548 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:54:35.004547  453548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:54:35.089781  453548 cri.go:89] found id: "9672e2b09621917c8753e5c69bbf1397081ae463b4fcf497c6aa6562d4b475d8"
	I1025 09:54:35.089811  453548 cri.go:89] found id: "f4d5c57b415b11d71b55538ee6f875fbd2524c78bbfa0f6e22f11fbe7622f2fb"
	I1025 09:54:35.089817  453548 cri.go:89] found id: "e995999ac2d28a730193b4932ce9f0a03b7388dd1c393907b0ad9b4e573b6329"
	I1025 09:54:35.089821  453548 cri.go:89] found id: "891b68d0f84289dab3ab047662084fe3d552922e5f89141313e5f0f5b1b1c532"
	I1025 09:54:35.089825  453548 cri.go:89] found id: "a5016565fe92c6c3e2b7f15714ef4e22e9a01067673cac39fa54fcac388a2b87"
	I1025 09:54:35.089833  453548 cri.go:89] found id: "8094fc5d7b37a8f46ff289c9c571c8256e7a44a478343b03510438967ec370e0"
	I1025 09:54:35.089837  453548 cri.go:89] found id: "a6c43c376a4b3de0805237ed87bb2bed809e8771389a9c4f6da15c3125a99803"
	I1025 09:54:35.089841  453548 cri.go:89] found id: "a75cff0462b2260fa975ed411fc9a80d7004abff2c65effeccfd7e1fe5b26257"
	I1025 09:54:35.089845  453548 cri.go:89] found id: "9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8"
	I1025 09:54:35.089873  453548 cri.go:89] found id: "f3921e136c33c717080e599280d369230d5c8c4d560187b222fd092310a533b7"
	I1025 09:54:35.089878  453548 cri.go:89] found id: ""
	I1025 09:54:35.089928  453548 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:54:35.106486  453548 retry.go:31] will retry after 194.783643ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:35Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:35.301836  453548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:35.315483  453548 pause.go:52] kubelet running: false
	I1025 09:54:35.315547  453548 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:54:35.476051  453548 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:54:35.476148  453548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:54:35.557722  453548 cri.go:89] found id: "9672e2b09621917c8753e5c69bbf1397081ae463b4fcf497c6aa6562d4b475d8"
	I1025 09:54:35.557740  453548 cri.go:89] found id: "f4d5c57b415b11d71b55538ee6f875fbd2524c78bbfa0f6e22f11fbe7622f2fb"
	I1025 09:54:35.557744  453548 cri.go:89] found id: "e995999ac2d28a730193b4932ce9f0a03b7388dd1c393907b0ad9b4e573b6329"
	I1025 09:54:35.557748  453548 cri.go:89] found id: "891b68d0f84289dab3ab047662084fe3d552922e5f89141313e5f0f5b1b1c532"
	I1025 09:54:35.557751  453548 cri.go:89] found id: "a5016565fe92c6c3e2b7f15714ef4e22e9a01067673cac39fa54fcac388a2b87"
	I1025 09:54:35.557756  453548 cri.go:89] found id: "8094fc5d7b37a8f46ff289c9c571c8256e7a44a478343b03510438967ec370e0"
	I1025 09:54:35.557760  453548 cri.go:89] found id: "a6c43c376a4b3de0805237ed87bb2bed809e8771389a9c4f6da15c3125a99803"
	I1025 09:54:35.557764  453548 cri.go:89] found id: "a75cff0462b2260fa975ed411fc9a80d7004abff2c65effeccfd7e1fe5b26257"
	I1025 09:54:35.557768  453548 cri.go:89] found id: "9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8"
	I1025 09:54:35.557782  453548 cri.go:89] found id: "f3921e136c33c717080e599280d369230d5c8c4d560187b222fd092310a533b7"
	I1025 09:54:35.557786  453548 cri.go:89] found id: ""
	I1025 09:54:35.557826  453548 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:54:35.572906  453548 retry.go:31] will retry after 488.973351ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:35Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:36.062649  453548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:36.076021  453548 pause.go:52] kubelet running: false
	I1025 09:54:36.076066  453548 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:54:36.218815  453548 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:54:36.218902  453548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:54:36.285054  453548 cri.go:89] found id: "9672e2b09621917c8753e5c69bbf1397081ae463b4fcf497c6aa6562d4b475d8"
	I1025 09:54:36.285079  453548 cri.go:89] found id: "f4d5c57b415b11d71b55538ee6f875fbd2524c78bbfa0f6e22f11fbe7622f2fb"
	I1025 09:54:36.285083  453548 cri.go:89] found id: "e995999ac2d28a730193b4932ce9f0a03b7388dd1c393907b0ad9b4e573b6329"
	I1025 09:54:36.285086  453548 cri.go:89] found id: "891b68d0f84289dab3ab047662084fe3d552922e5f89141313e5f0f5b1b1c532"
	I1025 09:54:36.285089  453548 cri.go:89] found id: "a5016565fe92c6c3e2b7f15714ef4e22e9a01067673cac39fa54fcac388a2b87"
	I1025 09:54:36.285092  453548 cri.go:89] found id: "8094fc5d7b37a8f46ff289c9c571c8256e7a44a478343b03510438967ec370e0"
	I1025 09:54:36.285094  453548 cri.go:89] found id: "a6c43c376a4b3de0805237ed87bb2bed809e8771389a9c4f6da15c3125a99803"
	I1025 09:54:36.285097  453548 cri.go:89] found id: "a75cff0462b2260fa975ed411fc9a80d7004abff2c65effeccfd7e1fe5b26257"
	I1025 09:54:36.285099  453548 cri.go:89] found id: "9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8"
	I1025 09:54:36.285104  453548 cri.go:89] found id: "f3921e136c33c717080e599280d369230d5c8c4d560187b222fd092310a533b7"
	I1025 09:54:36.285107  453548 cri.go:89] found id: ""
	I1025 09:54:36.285143  453548 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:54:36.298885  453548 out.go:203] 
	W1025 09:54:36.300183  453548 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:54:36.300208  453548 out.go:285] * 
	* 
	W1025 09:54:36.304220  453548 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:54:36.305265  453548 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-656799 --alsologtostderr -v=1 failed: exit status 80
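The trace above shows why the pause fails: minikube lists the kube-system containers through crictl successfully (the same ten IDs come back on each attempt), but its follow-up `sudo runc list -f json` aborts because /run/runc, the default runc state directory, is absent on the node, and the retries are exhausted after the third attempt. A hedged diagnostic sketch, assuming shell access through the profile (the crio config query is illustrative; which OCI runtime and state root this kicbase image actually configures is not confirmed by the log):

	# reproduce the exact failing call from the trace
	minikube ssh -p no-preload-656799 -- sudo runc list -f json
	# the CRI view still answers, so the runtime itself is up
	minikube ssh -p no-preload-656799 -- sudo crictl ps -a
	# inspect which OCI runtime (and state root) crio is configured to use
	minikube ssh -p no-preload-656799 -- sudo crio config | grep -A3 runtimes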
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-656799
helpers_test.go:243: (dbg) docker inspect no-preload-656799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3",
	        "Created": "2025-10-25T09:52:29.632041057Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 446033,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:53:58.528273469Z",
	            "FinishedAt": "2025-10-25T09:53:57.464607471Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3/hosts",
	        "LogPath": "/var/lib/docker/containers/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3-json.log",
	        "Name": "/no-preload-656799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-656799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-656799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3",
	                "LowerDir": "/var/lib/docker/overlay2/02618d7f775b19d8209d62a9f9c27036442b89e111a2465ca1e3390ba980e37b-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02618d7f775b19d8209d62a9f9c27036442b89e111a2465ca1e3390ba980e37b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02618d7f775b19d8209d62a9f9c27036442b89e111a2465ca1e3390ba980e37b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02618d7f775b19d8209d62a9f9c27036442b89e111a2465ca1e3390ba980e37b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-656799",
	                "Source": "/var/lib/docker/volumes/no-preload-656799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-656799",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-656799",
	                "name.minikube.sigs.k8s.io": "no-preload-656799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f74464e82058e9f93b7b5fc771219d87b40a4968978299b389e955aa2f446e22",
	            "SandboxKey": "/var/run/docker/netns/f74464e82058",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33245"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33246"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33249"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33247"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33248"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-656799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:90:92:e1:5a:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c5f8d7127b2abc1fa122a07d1a58513d1f998c751b6e0894b37ec014b426c376",
	                    "EndpointID": "0c143253b3a5a8166921a225502d7e3e344bf94a150612631afce23fc312a46b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-656799",
	                        "8ccea090eb6c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-656799 -n no-preload-656799
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-656799 -n no-preload-656799: exit status 2 (342.68361ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
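For context on the exit code: the `--format={{.Host}}` template above surfaces only the Host field, which is why the output reads Running while the command exits 2; minikube status appears to encode degraded components in its exit status rather than a hard failure, which is why the harness notes it "may be ok". A sketch that prints the remaining status fields for the same profile, so the paused or stopped component becomes visible:

	out/minikube-linux-amd64 status -p no-preload-656799 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'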
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-656799 logs -n 25
E1025 09:54:37.499480  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-656799 logs -n 25: (1.301615756s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p newest-cni-042675 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-042675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-676314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p old-k8s-version-676314 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ image   │ newest-cni-042675 image list --format=json                                                                                                                                                                                                    │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ pause   │ -p newest-cni-042675 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-656799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-001549                                                                                                                                                                                                               │ disable-driver-mounts-001549 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p no-preload-656799 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-676314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-656799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-880773 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-880773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-846915 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ stop    │ -p embed-certs-846915 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ image   │ no-preload-656799 image list --format=json                                                                                                                                                                                                    │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p no-preload-656799 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:54:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:54:19.275788  449952 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:54:19.275916  449952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:19.275925  449952 out.go:374] Setting ErrFile to fd 2...
	I1025 09:54:19.275930  449952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:19.276131  449952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:54:19.276587  449952 out.go:368] Setting JSON to false
	I1025 09:54:19.278081  449952 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5803,"bootTime":1761380256,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:54:19.278181  449952 start.go:141] virtualization: kvm guest
	I1025 09:54:19.280051  449952 out.go:179] * [default-k8s-diff-port-880773] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:54:19.281403  449952 notify.go:220] Checking for updates...
	I1025 09:54:19.281428  449952 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:54:19.282722  449952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:54:19.283928  449952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:19.285222  449952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:54:19.286379  449952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:54:19.287745  449952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:54:19.289294  449952 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:19.289852  449952 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:54:19.314779  449952 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:54:19.314881  449952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:19.376455  449952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:54:19.36493292 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
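
The `docker system info --format "{{json .}}"` dumps in this log are the engine's whole status object serialized as a single JSON document. A minimal Go sketch of decoding that output, reduced to the handful of fields the surrounding checks care about (the struct here is a hypothetical subset, not minikube's own info.go type):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo keeps only fields visible in the dump above; the real
	// payload carries many more keys. ServerVersion, CgroupDriver, NCPU
	// and MemTotal are the engine's actual JSON field names.
	type dockerInfo struct {
		ServerVersion string `json:"ServerVersion"`
		CgroupDriver  string `json:"CgroupDriver"`
		NCPU          int    `json:"NCPU"`
		MemTotal      int64  `json:"MemTotal"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s, cgroup driver %s, %d CPUs, %d bytes RAM\n",
			info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal)
	}
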
	I1025 09:54:19.376554  449952 docker.go:318] overlay module found
	I1025 09:54:19.377788  449952 out.go:179] * Using the docker driver based on existing profile
	I1025 09:54:19.378682  449952 start.go:305] selected driver: docker
	I1025 09:54:19.378698  449952 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:19.378796  449952 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:54:19.379365  449952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:19.439139  449952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:54:19.42844643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:19.439456  449952 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:19.439486  449952 cni.go:84] Creating CNI manager for ""
	I1025 09:54:19.439535  449952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:19.439596  449952 start.go:349] cluster config:
	{Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:19.441502  449952 out.go:179] * Starting "default-k8s-diff-port-880773" primary control-plane node in "default-k8s-diff-port-880773" cluster
	I1025 09:54:19.442631  449952 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:54:19.443961  449952 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:54:19.445195  449952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:19.445250  449952 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:54:19.445263  449952 cache.go:58] Caching tarball of preloaded images
	I1025 09:54:19.445295  449952 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:54:19.445383  449952 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:54:19.445399  449952 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:54:19.445551  449952 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/config.json ...
	I1025 09:54:19.469540  449952 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:54:19.469567  449952 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:54:19.469589  449952 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:54:19.469624  449952 start.go:360] acquireMachinesLock for default-k8s-diff-port-880773: {Name:mk083ef9abd9d3dbc7e696ddb5b045b01f4c2bf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:54:19.469696  449952 start.go:364] duration metric: took 50.424µs to acquireMachinesLock for "default-k8s-diff-port-880773"
	I1025 09:54:19.469720  449952 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:54:19.469728  449952 fix.go:54] fixHost starting: 
	I1025 09:54:19.470052  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:19.492315  449952 fix.go:112] recreateIfNeeded on default-k8s-diff-port-880773: state=Stopped err=<nil>
	W1025 09:54:19.492399  449952 fix.go:138] unexpected machine state, will restart: <nil>
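
The fix.go lines above boil down to: inspect the existing container's state and, because it is stopped, restart it rather than recreate it. A hedged Go sketch of that probe (the raw engine reports lowercase states such as "exited" and "running"; "Stopped" in the log is minikube's own mapping):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		name := "default-k8s-diff-port-880773"
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			// A missing container would land here and force the
			// full create path instead of a restart.
			panic(err)
		}
		state := strings.TrimSpace(string(out))
		fmt.Println("state:", state)
		if state == "exited" { // logged above as state=Stopped
			if err := exec.Command("docker", "start", name).Run(); err != nil {
				panic(err)
			}
		}
	}
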
	W1025 09:54:15.475986  440020 node_ready.go:57] node "embed-certs-846915" has "Ready":"False" status (will retry)
	I1025 09:54:17.476904  440020 node_ready.go:49] node "embed-certs-846915" is "Ready"
	I1025 09:54:17.476939  440020 node_ready.go:38] duration metric: took 11.003723459s for node "embed-certs-846915" to be "Ready" ...
	I1025 09:54:17.476955  440020 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:54:17.477016  440020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:54:17.489612  440020 api_server.go:72] duration metric: took 11.446400559s to wait for apiserver process to appear ...
	I1025 09:54:17.489645  440020 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:54:17.489664  440020 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:54:17.495599  440020 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:54:17.496792  440020 api_server.go:141] control plane version: v1.34.1
	I1025 09:54:17.496826  440020 api_server.go:131] duration metric: took 7.172976ms to wait for apiserver health ...
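
The healthz probe above is a plain HTTPS GET against the apiserver that passes once it returns 200 with the body "ok". A sketch of the same check; note the real client trusts the profile's cluster CA, so the InsecureSkipVerify shortcut below is an assumption for brevity only:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip verification instead of loading the
			// minikube CA bundle the real check uses.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}
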
	I1025 09:54:17.496835  440020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:54:17.500516  440020 system_pods.go:59] 8 kube-system pods found
	I1025 09:54:17.500592  440020 system_pods.go:61] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:17.500600  440020 system_pods.go:61] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.500610  440020 system_pods.go:61] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.500613  440020 system_pods.go:61] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.500617  440020 system_pods.go:61] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.500620  440020 system_pods.go:61] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.500623  440020 system_pods.go:61] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.500627  440020 system_pods.go:61] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:17.500643  440020 system_pods.go:74] duration metric: took 3.795746ms to wait for pod list to return data ...
	I1025 09:54:17.500654  440020 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:54:17.503287  440020 default_sa.go:45] found service account: "default"
	I1025 09:54:17.503309  440020 default_sa.go:55] duration metric: took 2.649102ms for default service account to be created ...
	I1025 09:54:17.503319  440020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:54:17.506326  440020 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:17.506368  440020 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:17.506374  440020 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.506380  440020 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.506390  440020 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.506397  440020 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.506400  440020 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.506405  440020 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.506410  440020 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:17.506433  440020 retry.go:31] will retry after 188.876759ms: missing components: kube-dns
	I1025 09:54:17.700456  440020 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:17.700546  440020 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:17.700558  440020 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.700568  440020 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.700582  440020 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.700588  440020 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.700593  440020 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.700599  440020 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.700612  440020 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:17.700632  440020 retry.go:31] will retry after 250.335068ms: missing components: kube-dns
	I1025 09:54:17.955256  440020 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:17.955289  440020 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Running
	I1025 09:54:17.955295  440020 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.955298  440020 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.955302  440020 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.955307  440020 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.955311  440020 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.955314  440020 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.955317  440020 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Running
	I1025 09:54:17.955324  440020 system_pods.go:126] duration metric: took 451.999845ms to wait for k8s-apps to be running ...
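
The retry.go lines above re-list the pods with a short, growing, jittered delay ("will retry after 188ms ... 250ms") until nothing is missing. A generic Go sketch of that shape; the exact backoff policy here is an assumption, not minikube's schedule:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil re-runs check with a growing, jittered delay until it
	// succeeds or the deadline passes.
	func retryUntil(deadline time.Duration, check func() error) error {
		start := time.Now()
		delay := 150 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("timed out: %w", err)
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
	}

	func main() {
		polls := 0
		_ = retryUntil(2*time.Second, func() error {
			polls++
			if polls < 3 { // pretend kube-dns needs two more polls
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
	}
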
	I1025 09:54:17.955332  440020 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:54:17.955420  440020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:17.970053  440020 system_svc.go:56] duration metric: took 14.706919ms WaitForService to wait for kubelet
	I1025 09:54:17.970086  440020 kubeadm.go:586] duration metric: took 11.926881356s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:17.970111  440020 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:54:17.973494  440020 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:54:17.973526  440020 node_conditions.go:123] node cpu capacity is 8
	I1025 09:54:17.973543  440020 node_conditions.go:105] duration metric: took 3.426431ms to run NodePressure ...
	I1025 09:54:17.973558  440020 start.go:241] waiting for startup goroutines ...
	I1025 09:54:17.973567  440020 start.go:246] waiting for cluster config update ...
	I1025 09:54:17.973582  440020 start.go:255] writing updated cluster config ...
	I1025 09:54:17.973852  440020 ssh_runner.go:195] Run: rm -f paused
	I1025 09:54:17.978265  440020 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:17.982758  440020 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.987122  440020 pod_ready.go:94] pod "coredns-66bc5c9577-4w68k" is "Ready"
	I1025 09:54:17.987148  440020 pod_ready.go:86] duration metric: took 4.365303ms for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.989310  440020 pod_ready.go:83] waiting for pod "etcd-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.993594  440020 pod_ready.go:94] pod "etcd-embed-certs-846915" is "Ready"
	I1025 09:54:17.993619  440020 pod_ready.go:86] duration metric: took 4.284136ms for pod "etcd-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.995810  440020 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.999546  440020 pod_ready.go:94] pod "kube-apiserver-embed-certs-846915" is "Ready"
	I1025 09:54:17.999606  440020 pod_ready.go:86] duration metric: took 3.774304ms for pod "kube-apiserver-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.001621  440020 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.384665  440020 pod_ready.go:94] pod "kube-controller-manager-embed-certs-846915" is "Ready"
	I1025 09:54:18.384701  440020 pod_ready.go:86] duration metric: took 383.060784ms for pod "kube-controller-manager-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.583914  440020 pod_ready.go:83] waiting for pod "kube-proxy-kfqqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.982945  440020 pod_ready.go:94] pod "kube-proxy-kfqqh" is "Ready"
	I1025 09:54:18.982973  440020 pod_ready.go:86] duration metric: took 399.034255ms for pod "kube-proxy-kfqqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:19.184109  440020 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:19.584000  440020 pod_ready.go:94] pod "kube-scheduler-embed-certs-846915" is "Ready"
	I1025 09:54:19.584035  440020 pod_ready.go:86] duration metric: took 399.892029ms for pod "kube-scheduler-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:19.584051  440020 pod_ready.go:40] duration metric: took 1.605758265s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:19.650747  440020 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:54:19.652803  440020 out.go:179] * Done! kubectl is now configured to use "embed-certs-846915" cluster and "default" namespace by default
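
The pod_ready.go phase above lists kube-system pods by component label and counts one as "Ready" when its PodReady condition is True. A sketch of the same check with client-go, assuming a kubeconfig at the default location:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady mirrors the check implied above: Ready means the
	// pod's PodReady condition is True.
	func podIsReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, podIsReady(&p))
		}
	}
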
	W1025 09:54:16.068318  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:18.567974  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:18.301621  445741 pod_ready.go:94] pod "coredns-66bc5c9577-sw9hv" is "Ready"
	I1025 09:54:18.301648  445741 pod_ready.go:86] duration metric: took 9.506322482s for pod "coredns-66bc5c9577-sw9hv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.304547  445741 pod_ready.go:83] waiting for pod "etcd-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:54:20.312171  445741 pod_ready.go:104] pod "etcd-no-preload-656799" is not "Ready", error: <nil>
	I1025 09:54:21.809723  445741 pod_ready.go:94] pod "etcd-no-preload-656799" is "Ready"
	I1025 09:54:21.809749  445741 pod_ready.go:86] duration metric: took 3.505178884s for pod "etcd-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.812231  445741 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.816695  445741 pod_ready.go:94] pod "kube-apiserver-no-preload-656799" is "Ready"
	I1025 09:54:21.816722  445741 pod_ready.go:86] duration metric: took 4.466508ms for pod "kube-apiserver-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.819011  445741 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.823589  445741 pod_ready.go:94] pod "kube-controller-manager-no-preload-656799" is "Ready"
	I1025 09:54:21.823628  445741 pod_ready.go:86] duration metric: took 4.593239ms for pod "kube-controller-manager-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.825939  445741 pod_ready.go:83] waiting for pod "kube-proxy-gfph2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.010836  445741 pod_ready.go:94] pod "kube-proxy-gfph2" is "Ready"
	I1025 09:54:22.010862  445741 pod_ready.go:86] duration metric: took 184.902324ms for pod "kube-proxy-gfph2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.210739  445741 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.608665  445741 pod_ready.go:94] pod "kube-scheduler-no-preload-656799" is "Ready"
	I1025 09:54:22.608695  445741 pod_ready.go:86] duration metric: took 397.92747ms for pod "kube-scheduler-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.608710  445741 pod_ready.go:40] duration metric: took 13.818887723s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:22.670288  445741 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:54:22.672465  445741 out.go:179] * Done! kubectl is now configured to use "no-preload-656799" cluster and "default" namespace by default
	I1025 09:54:19.494507  449952 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-880773" ...
	I1025 09:54:19.494587  449952 cli_runner.go:164] Run: docker start default-k8s-diff-port-880773
	I1025 09:54:19.824726  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:19.851116  449952 kic.go:430] container "default-k8s-diff-port-880773" state is running.
	I1025 09:54:19.851830  449952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:54:19.874663  449952 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/config.json ...
	I1025 09:54:19.874958  449952 machine.go:93] provisionDockerMachine start ...
	I1025 09:54:19.875036  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:19.900142  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:19.900490  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:19.900509  449952 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:54:19.901160  449952 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54890->127.0.0.1:33250: read: connection reset by peer
	I1025 09:54:23.064068  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-880773
	
	I1025 09:54:23.064110  449952 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-880773"
	I1025 09:54:23.064192  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.086772  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:23.087065  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:23.087087  449952 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-880773 && echo "default-k8s-diff-port-880773" | sudo tee /etc/hostname
	I1025 09:54:23.252426  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-880773
	
	I1025 09:54:23.252521  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.273044  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:23.273316  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:23.273335  449952 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-880773' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-880773/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-880773' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:54:23.424572  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
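
Provisioning above runs its commands over SSH against the port Docker published for the container (127.0.0.1:33250); the very first dial failed with "connection reset by peer" because sshd inside the freshly restarted container was not up yet, so the dial is retried. A sketch of the same pattern with golang.org/x/crypto/ssh (the key path is copied from the sshutil.go lines above; treat it as illustrative):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // localhost-published port
			Timeout:         5 * time.Second,
		}
		// Retry briefly while sshd inside the container comes up.
		var client *ssh.Client
		for i := 0; i < 10; i++ {
			if client, err = ssh.Dial("tcp", "127.0.0.1:33250", cfg); err == nil {
				break
			}
			time.Sleep(time.Second)
		}
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}
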
	I1025 09:54:23.424603  449952 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:54:23.424629  449952 ubuntu.go:190] setting up certificates
	I1025 09:54:23.424642  449952 provision.go:84] configureAuth start
	I1025 09:54:23.424716  449952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:54:23.447850  449952 provision.go:143] copyHostCerts
	I1025 09:54:23.447922  449952 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:54:23.447939  449952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:54:23.448010  449952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:54:23.448121  449952 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:54:23.448133  449952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:54:23.448172  449952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:54:23.448307  449952 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:54:23.448322  449952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:54:23.448386  449952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:54:23.448466  449952 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-880773 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-880773 localhost minikube]
	I1025 09:54:23.670392  449952 provision.go:177] copyRemoteCerts
	I1025 09:54:23.670473  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:54:23.670534  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.695861  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:23.810003  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:54:23.831919  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 09:54:23.855020  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:54:23.876651  449952 provision.go:87] duration metric: took 451.986685ms to configureAuth
	I1025 09:54:23.876682  449952 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:54:23.876901  449952 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:23.877015  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.898381  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:23.898653  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:23.898684  449952 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1025 09:54:20.568510  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:22.569444  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:25.068911  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:24.748214  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:54:24.748254  449952 machine.go:96] duration metric: took 4.873275374s to provisionDockerMachine
	I1025 09:54:24.748278  449952 start.go:293] postStartSetup for "default-k8s-diff-port-880773" (driver="docker")
	I1025 09:54:24.748293  449952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:54:24.748387  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:54:24.748520  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:24.768940  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:24.873795  449952 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:54:24.877543  449952 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:54:24.877575  449952 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:54:24.877589  449952 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:54:24.877661  449952 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:54:24.877782  449952 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:54:24.877958  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:54:24.887735  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:24.906567  449952 start.go:296] duration metric: took 158.269737ms for postStartSetup
	I1025 09:54:24.906638  449952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:54:24.906671  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:24.925060  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:25.024684  449952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:54:25.029312  449952 fix.go:56] duration metric: took 5.559580439s for fixHost
	I1025 09:54:25.029335  449952 start.go:83] releasing machines lock for "default-k8s-diff-port-880773", held for 5.559626356s
	I1025 09:54:25.029412  449952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:54:25.053651  449952 ssh_runner.go:195] Run: cat /version.json
	I1025 09:54:25.053671  449952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:54:25.053710  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:25.053740  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:25.076792  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:25.077574  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:25.177839  449952 ssh_runner.go:195] Run: systemctl --version
	I1025 09:54:25.232420  449952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:54:25.269857  449952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:54:25.274931  449952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:54:25.275022  449952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:54:25.283809  449952 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:54:25.283844  449952 start.go:495] detecting cgroup driver to use...
	I1025 09:54:25.283873  449952 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:54:25.283907  449952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:54:25.298715  449952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:54:25.311114  449952 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:54:25.311179  449952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:54:25.326245  449952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:54:25.338983  449952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:54:25.421886  449952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:54:25.507785  449952 docker.go:234] disabling docker service ...
	I1025 09:54:25.507851  449952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:54:25.522758  449952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:54:25.535545  449952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:54:25.624987  449952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:54:25.708591  449952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:54:25.721462  449952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:54:25.736203  449952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:54:25.736286  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.745513  449952 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:54:25.745572  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.754426  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.763537  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.772424  449952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:54:25.780767  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.789663  449952 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.798468  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
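
The sed/grep sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, forces the systemd cgroup manager, puts conmon in the "pod" cgroup, and opens unprivileged low ports via default_sysctls. Reassembled from those commands, the touched keys should end up roughly as below (section placement is an assumption; unrelated keys in the file are omitted):

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
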
	I1025 09:54:25.807406  449952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:54:25.815004  449952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:54:25.822998  449952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:25.903676  449952 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:54:26.020906  449952 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:54:26.020973  449952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:54:26.025150  449952 start.go:563] Will wait 60s for crictl version
	I1025 09:54:26.025208  449952 ssh_runner.go:195] Run: which crictl
	I1025 09:54:26.029013  449952 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:54:26.057753  449952 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:54:26.057819  449952 ssh_runner.go:195] Run: crio --version
	I1025 09:54:26.086687  449952 ssh_runner.go:195] Run: crio --version
	I1025 09:54:26.116337  449952 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:54:26.117443  449952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-880773 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:54:26.135714  449952 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 09:54:26.140427  449952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:54:26.154403  449952 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:54:26.154570  449952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:26.154635  449952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:26.192928  449952 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:26.192961  449952 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:54:26.193024  449952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:26.221578  449952 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:26.221602  449952 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:54:26.221611  449952 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1025 09:54:26.221708  449952 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-880773 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:54:26.221767  449952 ssh_runner.go:195] Run: crio config
	I1025 09:54:26.266519  449952 cni.go:84] Creating CNI manager for ""
	I1025 09:54:26.266551  449952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:26.266577  449952 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:54:26.266705  449952 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-880773 NodeName:default-k8s-diff-port-880773 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:54:26.266942  449952 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-880773"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:54:26.267030  449952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:54:26.276099  449952 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:54:26.276158  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:54:26.283856  449952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 09:54:26.296736  449952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:54:26.309600  449952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1025 09:54:26.322267  449952 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:54:26.325950  449952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:54:26.336085  449952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:26.418603  449952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:26.445329  449952 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773 for IP: 192.168.94.2
	I1025 09:54:26.445370  449952 certs.go:195] generating shared ca certs ...
	I1025 09:54:26.445391  449952 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:26.445589  449952 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:54:26.445651  449952 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:54:26.445663  449952 certs.go:257] generating profile certs ...
	I1025 09:54:26.445763  449952 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/client.key
	I1025 09:54:26.445836  449952 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key.bf049977
	I1025 09:54:26.445889  449952 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.key
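
certs.go above reuses each profile certificate instead of regenerating it because it still validates. A minimal Go sketch of such a validity probe: parse the PEM block and check the NotBefore/NotAfter window (minikube's real check also compares SANs and the signing CA, so this is deliberately partial; the path is illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certStillValid parses a PEM certificate and reports whether the
	// current time falls inside its validity window.
	func certStillValid(path string) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		now := time.Now()
		return now.After(cert.NotBefore) && now.Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := certStillValid("apiserver.crt") // illustrative path
		fmt.Println(ok, err)
	}
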
	I1025 09:54:26.446021  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:54:26.446059  449952 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:54:26.446071  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:54:26.446100  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:54:26.446130  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:54:26.446159  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:54:26.446208  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:26.447082  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:54:26.467801  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:54:26.487512  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:54:26.507419  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:54:26.531864  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 09:54:26.550342  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:54:26.569273  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:54:26.587593  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:54:26.605286  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:54:26.623801  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:54:26.642803  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:54:26.660752  449952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:54:26.674006  449952 ssh_runner.go:195] Run: openssl version
	I1025 09:54:26.680368  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:54:26.689226  449952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:26.693134  449952 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:26.693180  449952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:26.728010  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:54:26.736810  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:54:26.746043  449952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:54:26.749893  449952 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:54:26.749943  449952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:54:26.785153  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:54:26.794063  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:54:26.802929  449952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:54:26.807038  449952 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:54:26.807101  449952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:54:26.844046  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
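Each `openssl x509 -hash` call above computes the subject-name hash that OpenSSL uses to look certificates up in /etc/ssl/certs; the `ln -fs` that follows publishes the CA under that hash with a `.0` suffix. A sketch of the same pattern for one cert (b5213941 is the hash this run produced for minikubeCA):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0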
	I1025 09:54:26.852738  449952 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:54:26.856516  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:54:26.892058  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:54:26.928987  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:54:26.978149  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:54:27.021912  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:54:27.075255  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
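`-checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether control-plane certs need regeneration. A sketch that sweeps the same files checked above:

    # report any control-plane cert that would expire within 24h
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
        || echo "expiring soon: ${c}.crt"
    done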
	I1025 09:54:27.132302  449952 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:27.132461  449952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:54:27.132541  449952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:54:27.166099  449952 cri.go:89] found id: "8a40c304121945c99334f375a4fc8f1073390b82cca6a44c6e2b224a5804ed43"
	I1025 09:54:27.166122  449952 cri.go:89] found id: "1099e940dc59e4a7fc6edf4f82c427fc4633cbc73d1759f0ef430fccd002219f"
	I1025 09:54:27.166131  449952 cri.go:89] found id: "b7360eb6624b8284557553c607130a8087e3690512dcc9caea4351f9f876fd02"
	I1025 09:54:27.166136  449952 cri.go:89] found id: "9a7e2aef555d4452a0b73ff6d39e556aaf40affe43c7adcaf8fc119b3910c298"
	I1025 09:54:27.166141  449952 cri.go:89] found id: ""
	I1025 09:54:27.166194  449952 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:54:27.179061  449952 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:27Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:27.179160  449952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:54:27.188157  449952 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:54:27.188180  449952 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:54:27.188228  449952 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:54:27.196153  449952 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:54:27.197499  449952 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-880773" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:27.198480  449952 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-130604/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-880773" cluster setting kubeconfig missing "default-k8s-diff-port-880773" context setting]
	I1025 09:54:27.199935  449952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:27.202256  449952 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:54:27.210782  449952 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1025 09:54:27.210819  449952 kubeadm.go:601] duration metric: took 22.632727ms to restartPrimaryControlPlane
	I1025 09:54:27.210865  449952 kubeadm.go:402] duration metric: took 78.655845ms to StartCluster
	I1025 09:54:27.210883  449952 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:27.210942  449952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:27.213436  449952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
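When the profile is missing from the kubeconfig, minikube adds the cluster and context entries itself, as the two WriteFile lines above show. The same repair is available on demand:

    minikube -p default-k8s-diff-port-880773 update-context   # re-point kubeconfig at this profile
    kubectl config use-context default-k8s-diff-port-880773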
	I1025 09:54:27.213678  449952 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:54:27.213737  449952 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:54:27.213844  449952 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-880773"
	I1025 09:54:27.213859  449952 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-880773"
	I1025 09:54:27.213875  449952 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-880773"
	I1025 09:54:27.213886  449952 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-880773"
	I1025 09:54:27.213891  449952 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-880773"
	W1025 09:54:27.213898  449952 addons.go:247] addon dashboard should already be in state true
	I1025 09:54:27.213936  449952 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:54:27.213939  449952 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:27.213866  449952 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-880773"
	W1025 09:54:27.214066  449952 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:54:27.214095  449952 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:54:27.214261  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.214456  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.214610  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.216018  449952 out.go:179] * Verifying Kubernetes components...
	I1025 09:54:27.217234  449952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:27.239708  449952 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-880773"
	W1025 09:54:27.239738  449952 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:54:27.239770  449952 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:54:27.240253  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.242481  449952 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:54:27.242489  449952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:54:27.243627  449952 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:27.243645  449952 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:54:27.243651  449952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:54:27.243712  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:27.247468  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:54:27.247486  449952 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:54:27.247539  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:27.267591  449952 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:27.267622  449952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:54:27.267686  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:27.276575  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:27.285081  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:27.298498  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:27.368890  449952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:27.383755  449952 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-880773" to be "Ready" ...
	I1025 09:54:27.395977  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:54:27.396003  449952 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:54:27.406130  449952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:27.411552  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:54:27.411662  449952 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:54:27.419928  449952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:27.427159  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:54:27.427182  449952 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:54:27.446072  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:54:27.446100  449952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:54:27.471003  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:54:27.471033  449952 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:54:27.488999  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:54:27.489025  449952 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:54:27.503088  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:54:27.503113  449952 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:54:27.517184  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:54:27.517212  449952 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:54:27.530517  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:27.530540  449952 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:54:27.545962  449952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:29.018628  449952 node_ready.go:49] node "default-k8s-diff-port-880773" is "Ready"
	I1025 09:54:29.018668  449952 node_ready.go:38] duration metric: took 1.634880084s for node "default-k8s-diff-port-880773" to be "Ready" ...
	I1025 09:54:29.018686  449952 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:54:29.018740  449952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:54:29.506034  449952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.099869063s)
	I1025 09:54:29.506102  449952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.086134972s)
	I1025 09:54:29.506180  449952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.960181276s)
	I1025 09:54:29.506238  449952 api_server.go:72] duration metric: took 2.292529535s to wait for apiserver process to appear ...
	I1025 09:54:29.506289  449952 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:54:29.506306  449952 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1025 09:54:29.507716  449952 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-880773 addons enable metrics-server
	
	I1025 09:54:29.513028  449952 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:54:29.513055  449952 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
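The 500s are expected during a restart: /healthz aggregates poststarthooks, and it keeps failing until late hooks such as rbac/bootstrap-roles finish. The same poll can be reproduced with curl; /healthz is typically readable without credentials via the system:public-info-viewer binding, and -k skips verification of the self-signed CA:

    curl -sk https://192.168.94.2:8444/healthz              # "ok" once every hook passes
    curl -sk "https://192.168.94.2:8444/healthz?verbose"    # per-check [+]/[-] breakdown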
	I1025 09:54:29.514792  449952 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1025 09:54:29.515891  449952 addons.go:514] duration metric: took 2.302163358s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1025 09:54:27.071249  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:29.568141  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:30.007035  449952 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1025 09:54:30.013495  449952 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:54:30.013618  449952 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:54:30.507293  449952 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1025 09:54:30.511406  449952 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1025 09:54:30.512375  449952 api_server.go:141] control plane version: v1.34.1
	I1025 09:54:30.512397  449952 api_server.go:131] duration metric: took 1.006101961s to wait for apiserver health ...
	I1025 09:54:30.512405  449952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:54:30.515834  449952 system_pods.go:59] 8 kube-system pods found
	I1025 09:54:30.515887  449952 system_pods.go:61] "coredns-66bc5c9577-29ltg" [5d5247ec-619e-4bcb-82c5-1d5c0b42b685] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:30.515906  449952 system_pods.go:61] "etcd-default-k8s-diff-port-880773" [abe5a2b4-061a-47af-9c04-41b3261607b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:54:30.515928  449952 system_pods.go:61] "kindnet-cnqn8" [c804731f-754b-4ce1-9609-1a6fc8cf317c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 09:54:30.515939  449952 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-880773" [e8188321-7de4-49f4-97f9-e7aeca6d00db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:54:30.515950  449952 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-880773" [29ba481f-eea8-41cb-bbde-2551ae253f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:54:30.515961  449952 system_pods.go:61] "kube-proxy-bg94v" [4b7ad6fe-03c3-41dd-9633-6ed6a648201f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:54:30.515973  449952 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-880773" [952c634f-45b2-401d-9a90-6d2123e839ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:54:30.515981  449952 system_pods.go:61] "storage-provisioner" [469fcc4c-281e-4595-aa3b-4ea853afb153] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:30.515998  449952 system_pods.go:74] duration metric: took 3.581249ms to wait for pod list to return data ...
	I1025 09:54:30.516008  449952 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:54:30.518119  449952 default_sa.go:45] found service account: "default"
	I1025 09:54:30.518138  449952 default_sa.go:55] duration metric: took 2.123947ms for default service account to be created ...
	I1025 09:54:30.518148  449952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:54:30.520334  449952 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:30.520372  449952 system_pods.go:89] "coredns-66bc5c9577-29ltg" [5d5247ec-619e-4bcb-82c5-1d5c0b42b685] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:30.520410  449952 system_pods.go:89] "etcd-default-k8s-diff-port-880773" [abe5a2b4-061a-47af-9c04-41b3261607b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:54:30.520421  449952 system_pods.go:89] "kindnet-cnqn8" [c804731f-754b-4ce1-9609-1a6fc8cf317c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 09:54:30.520430  449952 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-880773" [e8188321-7de4-49f4-97f9-e7aeca6d00db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:54:30.520439  449952 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-880773" [29ba481f-eea8-41cb-bbde-2551ae253f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:54:30.520446  449952 system_pods.go:89] "kube-proxy-bg94v" [4b7ad6fe-03c3-41dd-9633-6ed6a648201f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:54:30.520452  449952 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-880773" [952c634f-45b2-401d-9a90-6d2123e839ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:54:30.520458  449952 system_pods.go:89] "storage-provisioner" [469fcc4c-281e-4595-aa3b-4ea853afb153] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:30.520464  449952 system_pods.go:126] duration metric: took 2.311292ms to wait for k8s-apps to be running ...
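All eight pods are phase Running yet report ContainersNotReady right after the restart, which is why the later per-pod wait is still needed. A sketch that surfaces the same Ready condition from outside the cluster (the filter is standard kubectl jsonpath syntax):

    kubectl -n kube-system get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'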
	I1025 09:54:30.520472  449952 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:54:30.520522  449952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:30.533459  449952 system_svc.go:56] duration metric: took 12.977874ms WaitForService to wait for kubelet
	I1025 09:54:30.533492  449952 kubeadm.go:586] duration metric: took 3.319782027s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:30.533514  449952 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:54:30.536489  449952 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:54:30.536518  449952 node_conditions.go:123] node cpu capacity is 8
	I1025 09:54:30.536536  449952 node_conditions.go:105] duration metric: took 3.015821ms to run NodePressure ...
	I1025 09:54:30.536552  449952 start.go:241] waiting for startup goroutines ...
	I1025 09:54:30.536565  449952 start.go:246] waiting for cluster config update ...
	I1025 09:54:30.536584  449952 start.go:255] writing updated cluster config ...
	I1025 09:54:30.536891  449952 ssh_runner.go:195] Run: rm -f paused
	I1025 09:54:30.540962  449952 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:30.544202  449952 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-29ltg" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:54:32.550284  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:32.069496  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:34.069854  441651 pod_ready.go:94] pod "coredns-5dd5756b68-qffxt" is "Ready"
	I1025 09:54:34.069885  441651 pod_ready.go:86] duration metric: took 37.507966247s for pod "coredns-5dd5756b68-qffxt" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.074076  441651 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.080201  441651 pod_ready.go:94] pod "etcd-old-k8s-version-676314" is "Ready"
	I1025 09:54:34.080244  441651 pod_ready.go:86] duration metric: took 6.136939ms for pod "etcd-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.084014  441651 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.089480  441651 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-676314" is "Ready"
	I1025 09:54:34.089513  441651 pod_ready.go:86] duration metric: took 5.467331ms for pod "kube-apiserver-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.092917  441651 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.266390  441651 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-676314" is "Ready"
	I1025 09:54:34.266419  441651 pod_ready.go:86] duration metric: took 173.473814ms for pod "kube-controller-manager-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.468281  441651 pod_ready.go:83] waiting for pod "kube-proxy-bsxx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.866765  441651 pod_ready.go:94] pod "kube-proxy-bsxx6" is "Ready"
	I1025 09:54:34.866794  441651 pod_ready.go:86] duration metric: took 398.483847ms for pod "kube-proxy-bsxx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:35.067296  441651 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:35.466580  441651 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-676314" is "Ready"
	I1025 09:54:35.466609  441651 pod_ready.go:86] duration metric: took 399.280578ms for pod "kube-scheduler-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:35.466637  441651 pod_ready.go:40] duration metric: took 38.910774112s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
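An approximate stand-alone equivalent of this labelled wait is `kubectl wait`; it covers the "Ready" half but not the "or be gone" half that minikube also accepts:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s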
	I1025 09:54:35.520724  441651 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1025 09:54:35.522478  441651 out.go:203] 
	W1025 09:54:35.525673  441651 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 09:54:35.527157  441651 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 09:54:35.528391  441651 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-676314" cluster and "default" namespace by default
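The skew warning fires because the host kubectl (1.34.1) is six minor versions ahead of the 1.28.0 cluster, well outside the supported +/-1 window. The bundled passthrough the hint refers to downloads a client matching the cluster version:

    minikube -p old-k8s-version-676314 kubectl -- version --client   # fetches a v1.28.0 kubectl
    minikube -p old-k8s-version-676314 kubectl -- get pods -A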
	
	
	==> CRI-O <==
	Oct 25 09:54:22 no-preload-656799 crio[557]: time="2025-10-25T09:54:22.025206407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:22 no-preload-656799 crio[557]: time="2025-10-25T09:54:22.064993644Z" level=info msg="Created container f3921e136c33c717080e599280d369230d5c8c4d560187b222fd092310a533b7: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-45hnf/kubernetes-dashboard" id=ceb53ef9-7d42-4a29-9f9e-73cd532c700b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:22 no-preload-656799 crio[557]: time="2025-10-25T09:54:22.065755223Z" level=info msg="Starting container: f3921e136c33c717080e599280d369230d5c8c4d560187b222fd092310a533b7" id=271c9f37-2846-4a95-9812-5b09e1f2a3f3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:22 no-preload-656799 crio[557]: time="2025-10-25T09:54:22.068002931Z" level=info msg="Started container" PID=1503 containerID=f3921e136c33c717080e599280d369230d5c8c4d560187b222fd092310a533b7 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-45hnf/kubernetes-dashboard id=271c9f37-2846-4a95-9812-5b09e1f2a3f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=395df48cb744d03089d4e34fbf8d6efd162aaba2d4dddfff463b3213b7b7dea9
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.816854328Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=87278037-c133-4238-a499-405875d94ec7 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.817497359Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=afbbbceb-6f25-484f-b23a-6aeb276df16c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.820101898Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d08771ea-08c9-441c-b7ec-9a0cd9812509 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.825480734Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper" id=dbe9e535-c1a2-4276-a6e8-13cef1da9465 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.825610104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.832113776Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.832643461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.861362196Z" level=info msg="Created container 469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper" id=dbe9e535-c1a2-4276-a6e8-13cef1da9465 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.861925302Z" level=info msg="Starting container: 469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6" id=b53d67a3-f16c-4c66-b2ff-4c469dd0ed68 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.863731509Z" level=info msg="Started container" PID=1744 containerID=469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper id=b53d67a3-f16c-4c66-b2ff-4c469dd0ed68 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7083f13ed83c751d35867c2af1705840a6de654ddc4ce55e2dbc5c7af81808a6
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.035971341Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e6dcd91f-e1d9-46ad-b045-68e9a37bcaf2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.039031296Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=16edc3fa-4326-4089-bd97-c2f9465ea214 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.042407706Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper" id=8ee8468d-dd50-45e8-9812-ea9fd895b058 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.042545002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.051738636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.052266877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.078311722Z" level=info msg="Created container 9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper" id=8ee8468d-dd50-45e8-9812-ea9fd895b058 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.079082991Z" level=info msg="Starting container: 9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8" id=9765ff80-b0fb-43d9-9896-74e8a94a45b4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.081473758Z" level=info msg="Started container" PID=1755 containerID=9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper id=9765ff80-b0fb-43d9-9896-74e8a94a45b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7083f13ed83c751d35867c2af1705840a6de654ddc4ce55e2dbc5c7af81808a6
	Oct 25 09:54:26 no-preload-656799 crio[557]: time="2025-10-25T09:54:26.041832895Z" level=info msg="Removing container: 469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6" id=f2f23fc3-39ed-4eba-8b2e-3c28c625a607 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:54:26 no-preload-656799 crio[557]: time="2025-10-25T09:54:26.051800419Z" level=info msg="Removed container 469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper" id=f2f23fc3-39ed-4eba-8b2e-3c28c625a607 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9c6377eb3e36d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   1                   7083f13ed83c7       dashboard-metrics-scraper-6ffb444bf9-qgpjh   kubernetes-dashboard
	f3921e136c33c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   15 seconds ago      Running             kubernetes-dashboard        0                   395df48cb744d       kubernetes-dashboard-855c9754f9-45hnf        kubernetes-dashboard
	9672e2b096219       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           25 seconds ago      Running             coredns                     0                   b34368269f37d       coredns-66bc5c9577-sw9hv                     kube-system
	f3a7d9ca625b7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           25 seconds ago      Running             busybox                     1                   771a965a5a19c       busybox                                      default
	f4d5c57b415b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           29 seconds ago      Running             storage-provisioner         0                   eb71971fd35e6       storage-provisioner                          kube-system
	e995999ac2d28       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           29 seconds ago      Running             kindnet-cni                 0                   445fa6ed2f44b       kindnet-nbj7f                                kube-system
	891b68d0f8428       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           29 seconds ago      Running             kube-proxy                  0                   901fa2f0399cd       kube-proxy-gfph2                             kube-system
	a5016565fe92c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           32 seconds ago      Running             kube-controller-manager     0                   e0a9a0ead2e5c       kube-controller-manager-no-preload-656799    kube-system
	8094fc5d7b37a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           32 seconds ago      Running             kube-scheduler              0                   d1b1ff8bdf389       kube-scheduler-no-preload-656799             kube-system
	a6c43c376a4b3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           32 seconds ago      Running             kube-apiserver              0                   15932ce04aeaa       kube-apiserver-no-preload-656799             kube-system
	a75cff0462b22       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           32 seconds ago      Running             etcd                        0                   924ec9fe168dc       etcd-no-preload-656799                       kube-system
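The table shows the scraper container 9c6377eb3e36d already Exited on attempt 1, matching the Created/Removed cycle in the CRI-O log above. From a node shell (`minikube ssh`), the usual next steps are:

    sudo crictl inspect 9c6377eb3e36d   # exit code, finish time, mounts (ID prefixes are accepted)
    sudo crictl logs 9c6377eb3e36d      # the container's stdout/stderr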
	
	
	==> coredns [9672e2b09621917c8753e5c69bbf1397081ae463b4fcf497c6aa6562d4b475d8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47145 - 19496 "HINFO IN 6985121326916034595.722092135468356731. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.07090186s
	
	
	==> describe nodes <==
	Name:               no-preload-656799
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-656799
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=no-preload-656799
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_53_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:53:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-656799
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:54:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:54:17 +0000   Sat, 25 Oct 2025 09:53:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:54:17 +0000   Sat, 25 Oct 2025 09:53:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:54:17 +0000   Sat, 25 Oct 2025 09:53:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:54:17 +0000   Sat, 25 Oct 2025 09:54:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-656799
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                5bcc7607-4d30-49cf-9ec1-c2712dc2e9c1
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 coredns-66bc5c9577-sw9hv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     82s
	  kube-system                 etcd-no-preload-656799                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         87s
	  kube-system                 kindnet-nbj7f                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-no-preload-656799              250m (3%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-no-preload-656799     200m (2%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-gfph2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-no-preload-656799              100m (1%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qgpjh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-45hnf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 81s                kube-proxy       
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  87s                kubelet          Node no-preload-656799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s                kubelet          Node no-preload-656799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s                kubelet          Node no-preload-656799 status is now: NodeHasSufficientPID
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           83s                node-controller  Node no-preload-656799 event: Registered Node no-preload-656799 in Controller
	  Normal  NodeReady                69s                kubelet          Node no-preload-656799 status is now: NodeReady
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s (x8 over 33s)  kubelet          Node no-preload-656799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s (x8 over 33s)  kubelet          Node no-preload-656799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s (x8 over 33s)  kubelet          Node no-preload-656799 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node no-preload-656799 event: Registered Node no-preload-656799 in Controller
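
For orientation, the percentages in the "Allocated resources" table above are the summed pod requests divided by the node's Allocatable figures, with integer truncation: 850m of 8000m CPU shows as 10%, and 220Mi of ~31.3Gi memory shows as 0%. A minimal sketch of that arithmetic, using only the figures above (illustrative, not part of the test harness):

	// Sanity-check of the "Allocated resources" percentages (requests / allocatable).
	package main

	import "fmt"

	func main() {
		cpuRequests := int64(100 + 100 + 100 + 250 + 200 + 100) // millicores summed from the pod table
		cpuAllocatable := int64(8 * 1000)                       // 8 allocatable cores
		memRequestsKi := int64(220 * 1024)                      // 220Mi of requests
		memAllocatableKi := int64(32863364)                     // memory from Allocatable above
		fmt.Printf("cpu %dm (%d%%)\n", cpuRequests, cpuRequests*100/cpuAllocatable)                // cpu 850m (10%)
		fmt.Printf("memory %dMi (%d%%)\n", memRequestsKi/1024, memRequestsKi*100/memAllocatableKi) // memory 220Mi (0%)
	}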
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	
	
	==> etcd [a75cff0462b2260fa975ed411fc9a80d7004abff2c65effeccfd7e1fe5b26257] <==
	{"level":"warn","ts":"2025-10-25T09:54:06.779545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.787093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.802075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.809106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.815576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.823654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.830108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.837440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.845759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.854124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.863708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.884009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.897133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.903822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.910411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.917114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.923956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.930646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.937619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.946698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.962673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.966158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.973831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.981573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:07.042800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41556","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:54:37 up  1:37,  0 user,  load average: 6.36, 4.73, 2.85
	Linux no-preload-656799 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e995999ac2d28a730193b4932ce9f0a03b7388dd1c393907b0ad9b4e573b6329] <==
	I1025 09:54:08.598154       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:54:08.613750       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:54:08.614031       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:54:08.614056       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:54:08.614095       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:54:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:54:08.905050       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:54:08.905084       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:54:08.905100       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:54:08.905273       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:54:09.205437       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:54:09.205469       1 metrics.go:72] Registering metrics
	I1025 09:54:09.205560       1 controller.go:711] "Syncing nftables rules"
	I1025 09:54:18.905330       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:54:18.905442       1 main.go:301] handling current node
	I1025 09:54:28.905542       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:54:28.905599       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a6c43c376a4b3de0805237ed87bb2bed809e8771389a9c4f6da15c3125a99803] <==
	I1025 09:54:07.591671       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:54:07.592774       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:54:07.592795       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:54:07.593001       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:54:07.593034       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:54:07.593593       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 09:54:07.593859       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:54:07.593953       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:54:07.602438       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:54:07.608918       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:54:07.621210       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:54:07.621260       1 policy_source.go:240] refreshing policies
	I1025 09:54:07.629862       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:54:07.988312       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:54:08.033373       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:54:08.047189       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:54:08.077172       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:54:08.090903       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:54:08.155627       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.34.159"}
	I1025 09:54:08.174887       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.90.56"}
	I1025 09:54:08.495143       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:54:11.286450       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:54:11.435698       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:54:11.532990       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a5016565fe92c6c3e2b7f15714ef4e22e9a01067673cac39fa54fcac388a2b87] <==
	I1025 09:54:10.929989       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:54:10.930242       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:54:10.930401       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:54:10.930486       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:54:10.930626       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:54:10.930690       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:54:10.930690       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:54:10.933499       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:54:10.934948       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:54:10.934998       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:54:10.935004       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:54:10.935026       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:54:10.935035       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:54:10.935040       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:54:10.937401       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:54:10.937422       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:54:10.937408       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:54:10.937610       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:54:10.940866       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:54:10.944160       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:54:10.946434       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:54:10.949637       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:54:10.949800       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:54:10.951905       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:54:20.882947       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [891b68d0f84289dab3ab047662084fe3d552922e5f89141313e5f0f5b1b1c532] <==
	I1025 09:54:08.391606       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:54:08.464928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:54:08.565612       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:54:08.565656       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 09:54:08.565766       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:54:08.590332       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:54:08.590485       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:54:08.598021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:54:08.598450       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:54:08.598470       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:54:08.600483       1 config.go:309] "Starting node config controller"
	I1025 09:54:08.600503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:54:08.600615       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:54:08.600643       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:54:08.600704       1 config.go:200] "Starting service config controller"
	I1025 09:54:08.600710       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:54:08.600744       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:54:08.600749       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:54:08.701611       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:54:08.701675       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:54:08.701694       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:54:08.701708       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8094fc5d7b37a8f46ff289c9c571c8256e7a44a478343b03510438967ec370e0] <==
	I1025 09:54:06.820529       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:54:07.531182       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:54:07.531244       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:54:07.531258       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:54:07.531267       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:54:07.563460       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:54:07.563496       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:54:07.566885       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:54:07.567015       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:54:07.570237       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:54:07.570285       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:54:07.667459       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:54:08 no-preload-656799 kubelet[707]: E1025 09:54:08.648721     707 projected.go:196] Error preparing data for projected volume kube-api-access-hc4xt for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Oct 25 09:54:08 no-preload-656799 kubelet[707]: E1025 09:54:08.648808     707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e58484e4-93ad-4c1e-af87-8034efb88486-kube-api-access-hc4xt podName:e58484e4-93ad-4c1e-af87-8034efb88486 nodeName:}" failed. No retries permitted until 2025-10-25 09:54:09.648783827 +0000 UTC m=+4.787245494 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hc4xt" (UniqueName: "kubernetes.io/projected/e58484e4-93ad-4c1e-af87-8034efb88486-kube-api-access-hc4xt") pod "busybox" (UID: "e58484e4-93ad-4c1e-af87-8034efb88486") : object "default"/"kube-root-ca.crt" not registered
	Oct 25 09:54:09 no-preload-656799 kubelet[707]: E1025 09:54:09.553719     707 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 09:54:09 no-preload-656799 kubelet[707]: E1025 09:54:09.553848     707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b8784813-9a51-43f5-ae3a-d5f9a1cd7d41-config-volume podName:b8784813-9a51-43f5-ae3a-d5f9a1cd7d41 nodeName:}" failed. No retries permitted until 2025-10-25 09:54:11.553822833 +0000 UTC m=+6.692284498 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b8784813-9a51-43f5-ae3a-d5f9a1cd7d41-config-volume") pod "coredns-66bc5c9577-sw9hv" (UID: "b8784813-9a51-43f5-ae3a-d5f9a1cd7d41") : object "kube-system"/"coredns" not registered
	Oct 25 09:54:09 no-preload-656799 kubelet[707]: E1025 09:54:09.654903     707 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Oct 25 09:54:09 no-preload-656799 kubelet[707]: E1025 09:54:09.654938     707 projected.go:196] Error preparing data for projected volume kube-api-access-hc4xt for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Oct 25 09:54:09 no-preload-656799 kubelet[707]: E1025 09:54:09.655002     707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e58484e4-93ad-4c1e-af87-8034efb88486-kube-api-access-hc4xt podName:e58484e4-93ad-4c1e-af87-8034efb88486 nodeName:}" failed. No retries permitted until 2025-10-25 09:54:11.654987813 +0000 UTC m=+6.793449464 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hc4xt" (UniqueName: "kubernetes.io/projected/e58484e4-93ad-4c1e-af87-8034efb88486-kube-api-access-hc4xt") pod "busybox" (UID: "e58484e4-93ad-4c1e-af87-8034efb88486") : object "default"/"kube-root-ca.crt" not registered
	Oct 25 09:54:16 no-preload-656799 kubelet[707]: I1025 09:54:16.860921     707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:54:18 no-preload-656799 kubelet[707]: I1025 09:54:18.004895     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jr9z\" (UniqueName: \"kubernetes.io/projected/4bfa16b2-fe16-47c9-8bd7-63c64dae30ac-kube-api-access-7jr9z\") pod \"kubernetes-dashboard-855c9754f9-45hnf\" (UID: \"4bfa16b2-fe16-47c9-8bd7-63c64dae30ac\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-45hnf"
	Oct 25 09:54:18 no-preload-656799 kubelet[707]: I1025 09:54:18.004963     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9d8dbc62-2ba1-4794-baab-600f510e30ab-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qgpjh\" (UID: \"9d8dbc62-2ba1-4794-baab-600f510e30ab\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh"
	Oct 25 09:54:18 no-preload-656799 kubelet[707]: I1025 09:54:18.005095     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfpbm\" (UniqueName: \"kubernetes.io/projected/9d8dbc62-2ba1-4794-baab-600f510e30ab-kube-api-access-hfpbm\") pod \"dashboard-metrics-scraper-6ffb444bf9-qgpjh\" (UID: \"9d8dbc62-2ba1-4794-baab-600f510e30ab\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh"
	Oct 25 09:54:18 no-preload-656799 kubelet[707]: I1025 09:54:18.005147     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4bfa16b2-fe16-47c9-8bd7-63c64dae30ac-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-45hnf\" (UID: \"4bfa16b2-fe16-47c9-8bd7-63c64dae30ac\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-45hnf"
	Oct 25 09:54:25 no-preload-656799 kubelet[707]: I1025 09:54:25.035534     707 scope.go:117] "RemoveContainer" containerID="469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6"
	Oct 25 09:54:25 no-preload-656799 kubelet[707]: I1025 09:54:25.047215     707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-45hnf" podStartSLOduration=10.295019954 podStartE2EDuration="14.047196257s" podCreationTimestamp="2025-10-25 09:54:11 +0000 UTC" firstStartedPulling="2025-10-25 09:54:18.262795437 +0000 UTC m=+13.401257088" lastFinishedPulling="2025-10-25 09:54:22.01497174 +0000 UTC m=+17.153433391" observedRunningTime="2025-10-25 09:54:23.044930319 +0000 UTC m=+18.183391992" watchObservedRunningTime="2025-10-25 09:54:25.047196257 +0000 UTC m=+20.185657929"
	Oct 25 09:54:26 no-preload-656799 kubelet[707]: I1025 09:54:26.040338     707 scope.go:117] "RemoveContainer" containerID="469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6"
	Oct 25 09:54:26 no-preload-656799 kubelet[707]: I1025 09:54:26.040640     707 scope.go:117] "RemoveContainer" containerID="9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8"
	Oct 25 09:54:26 no-preload-656799 kubelet[707]: E1025 09:54:26.040838     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpjh_kubernetes-dashboard(9d8dbc62-2ba1-4794-baab-600f510e30ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh" podUID="9d8dbc62-2ba1-4794-baab-600f510e30ab"
	Oct 25 09:54:27 no-preload-656799 kubelet[707]: I1025 09:54:27.047458     707 scope.go:117] "RemoveContainer" containerID="9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8"
	Oct 25 09:54:27 no-preload-656799 kubelet[707]: E1025 09:54:27.047720     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpjh_kubernetes-dashboard(9d8dbc62-2ba1-4794-baab-600f510e30ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh" podUID="9d8dbc62-2ba1-4794-baab-600f510e30ab"
	Oct 25 09:54:28 no-preload-656799 kubelet[707]: I1025 09:54:28.230192     707 scope.go:117] "RemoveContainer" containerID="9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8"
	Oct 25 09:54:28 no-preload-656799 kubelet[707]: E1025 09:54:28.230440     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpjh_kubernetes-dashboard(9d8dbc62-2ba1-4794-baab-600f510e30ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh" podUID="9d8dbc62-2ba1-4794-baab-600f510e30ab"
	Oct 25 09:54:34 no-preload-656799 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:54:35 no-preload-656799 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:54:35 no-preload-656799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:54:35 no-preload-656799 systemd[1]: kubelet.service: Consumed 1.234s CPU time.
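
The MountVolume retries earlier in this kubelet log back off with a doubling delay (durationBeforeRetry 1s, then 2s), and the CrashLoopBackOff entries show the same idea at the container level ("back-off 10s"). A minimal sketch of that doubling pattern, purely illustrative and not kubelet's actual implementation (the cap is an assumption):

	// Illustrative doubling backoff, mirroring the 1s -> 2s durationBeforeRetry
	// progression in the kubelet log above; maxDelay is an assumed upper bound.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		delay := time.Second
		const maxDelay = 2 * time.Minute
		for attempt := 1; attempt <= 5; attempt++ {
			fmt.Printf("attempt %d: retry in %v\n", attempt, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}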
	
	
	==> kubernetes-dashboard [f3921e136c33c717080e599280d369230d5c8c4d560187b222fd092310a533b7] <==
	2025/10/25 09:54:22 Starting overwatch
	2025/10/25 09:54:22 Using namespace: kubernetes-dashboard
	2025/10/25 09:54:22 Using in-cluster config to connect to apiserver
	2025/10/25 09:54:22 Using secret token for csrf signing
	2025/10/25 09:54:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:54:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:54:22 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:54:22 Generating JWE encryption key
	2025/10/25 09:54:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:54:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:54:22 Initializing JWE encryption key from synchronized object
	2025/10/25 09:54:22 Creating in-cluster Sidecar client
	2025/10/25 09:54:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:54:22 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [f4d5c57b415b11d71b55538ee6f875fbd2524c78bbfa0f6e22f11fbe7622f2fb] <==
	I1025 09:54:08.352829       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-656799 -n no-preload-656799
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-656799 -n no-preload-656799: exit status 2 (391.443764ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-656799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-656799
helpers_test.go:243: (dbg) docker inspect no-preload-656799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3",
	        "Created": "2025-10-25T09:52:29.632041057Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 446033,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:53:58.528273469Z",
	            "FinishedAt": "2025-10-25T09:53:57.464607471Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3/hosts",
	        "LogPath": "/var/lib/docker/containers/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3/8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3-json.log",
	        "Name": "/no-preload-656799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-656799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-656799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8ccea090eb6cd7e8aa22cc56ff6fae7cc9aec93a6905f15b0092990fd68811f3",
	                "LowerDir": "/var/lib/docker/overlay2/02618d7f775b19d8209d62a9f9c27036442b89e111a2465ca1e3390ba980e37b-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02618d7f775b19d8209d62a9f9c27036442b89e111a2465ca1e3390ba980e37b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02618d7f775b19d8209d62a9f9c27036442b89e111a2465ca1e3390ba980e37b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02618d7f775b19d8209d62a9f9c27036442b89e111a2465ca1e3390ba980e37b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-656799",
	                "Source": "/var/lib/docker/volumes/no-preload-656799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-656799",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-656799",
	                "name.minikube.sigs.k8s.io": "no-preload-656799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f74464e82058e9f93b7b5fc771219d87b40a4968978299b389e955aa2f446e22",
	            "SandboxKey": "/var/run/docker/netns/f74464e82058",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33245"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33246"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33249"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33247"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33248"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-656799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:90:92:e1:5a:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c5f8d7127b2abc1fa122a07d1a58513d1f998c751b6e0894b37ec014b426c376",
	                    "EndpointID": "0c143253b3a5a8166921a225502d7e3e344bf94a150612631afce23fc312a46b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-656799",
	                        "8ccea090eb6c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
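
A note on the inspect output above: the empty "HostPort" values under HostConfig.PortBindings ask Docker to assign ephemeral host ports, and the resolved mappings appear under NetworkSettings.Ports (33245-33249 here). A single mapping can be extracted with the docker CLI's standard Go-template support, for example:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-656799

which would print 33248 for the state captured above.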
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-656799 -n no-preload-656799
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-656799 -n no-preload-656799: exit status 2 (411.784903ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-656799 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-656799 logs -n 25: (1.363172303s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p newest-cni-042675 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-042675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-676314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ stop    │ -p old-k8s-version-676314 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ image   │ newest-cni-042675 image list --format=json                                                                                                                                                                                                    │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ pause   │ -p newest-cni-042675 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-656799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-001549                                                                                                                                                                                                               │ disable-driver-mounts-001549 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p no-preload-656799 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-676314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-656799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-880773 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-880773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-846915 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ stop    │ -p embed-certs-846915 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ image   │ no-preload-656799 image list --format=json                                                                                                                                                                                                    │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p no-preload-656799 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:54:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:54:19.275788  449952 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:54:19.275916  449952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:19.275925  449952 out.go:374] Setting ErrFile to fd 2...
	I1025 09:54:19.275930  449952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:19.276131  449952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:54:19.276587  449952 out.go:368] Setting JSON to false
	I1025 09:54:19.278081  449952 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5803,"bootTime":1761380256,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:54:19.278181  449952 start.go:141] virtualization: kvm guest
	I1025 09:54:19.280051  449952 out.go:179] * [default-k8s-diff-port-880773] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:54:19.281403  449952 notify.go:220] Checking for updates...
	I1025 09:54:19.281428  449952 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:54:19.282722  449952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:54:19.283928  449952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:19.285222  449952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:54:19.286379  449952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:54:19.287745  449952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:54:19.289294  449952 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:19.289852  449952 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:54:19.314779  449952 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:54:19.314881  449952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:19.376455  449952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:54:19.36493292 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
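	Note: `docker system info --format "{{json .}}"` above dumps the daemon state as one JSON object; the fields minikube acts on, such as CgroupDriver and ServerVersion, can be extracted directly. A sketch assuming jq is available on the host:
	  # Pull out the fields minikube reads from the daemon info.
	  $ docker system info --format '{{json .}}' | jq '{ServerVersion, CgroupDriver, NCPU, MemTotal}'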
	I1025 09:54:19.376554  449952 docker.go:318] overlay module found
	I1025 09:54:19.377788  449952 out.go:179] * Using the docker driver based on existing profile
	I1025 09:54:19.378682  449952 start.go:305] selected driver: docker
	I1025 09:54:19.378698  449952 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:19.378796  449952 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:54:19.379365  449952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:19.439139  449952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:54:19.42844643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:19.439456  449952 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:19.439486  449952 cni.go:84] Creating CNI manager for ""
	I1025 09:54:19.439535  449952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:19.439596  449952 start.go:349] cluster config:
	{Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:19.441502  449952 out.go:179] * Starting "default-k8s-diff-port-880773" primary control-plane node in "default-k8s-diff-port-880773" cluster
	I1025 09:54:19.442631  449952 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:54:19.443961  449952 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:54:19.445195  449952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:19.445250  449952 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:54:19.445263  449952 cache.go:58] Caching tarball of preloaded images
	I1025 09:54:19.445295  449952 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:54:19.445383  449952 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:54:19.445399  449952 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:54:19.445551  449952 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/config.json ...
	I1025 09:54:19.469540  449952 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:54:19.469567  449952 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:54:19.469589  449952 cache.go:232] Successfully downloaded all kic artifacts
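	Note: this start is fully cached: the v1.34.1/cri-o preload tarball is already on disk and the pinned kicbase image is already in the local Docker daemon, so no downloads are needed. The same checks by hand, a sketch using the cache path and image reference from the log:
	  # Is the preload tarball cached?
	  $ ls -lh /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/
	  # Is the kicbase image already in the daemon?
	  $ docker image inspect --format '{{.Id}}' gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773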
	I1025 09:54:19.469624  449952 start.go:360] acquireMachinesLock for default-k8s-diff-port-880773: {Name:mk083ef9abd9d3dbc7e696ddb5b045b01f4c2bf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:54:19.469696  449952 start.go:364] duration metric: took 50.424µs to acquireMachinesLock for "default-k8s-diff-port-880773"
	I1025 09:54:19.469720  449952 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:54:19.469728  449952 fix.go:54] fixHost starting: 
	I1025 09:54:19.470052  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:19.492315  449952 fix.go:112] recreateIfNeeded on default-k8s-diff-port-880773: state=Stopped err=<nil>
	W1025 09:54:19.492399  449952 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:54:15.475986  440020 node_ready.go:57] node "embed-certs-846915" has "Ready":"False" status (will retry)
	I1025 09:54:17.476904  440020 node_ready.go:49] node "embed-certs-846915" is "Ready"
	I1025 09:54:17.476939  440020 node_ready.go:38] duration metric: took 11.003723459s for node "embed-certs-846915" to be "Ready" ...
	I1025 09:54:17.476955  440020 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:54:17.477016  440020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:54:17.489612  440020 api_server.go:72] duration metric: took 11.446400559s to wait for apiserver process to appear ...
	I1025 09:54:17.489645  440020 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:54:17.489664  440020 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:54:17.495599  440020 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:54:17.496792  440020 api_server.go:141] control plane version: v1.34.1
	I1025 09:54:17.496826  440020 api_server.go:131] duration metric: took 7.172976ms to wait for apiserver health ...
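	Note: the health check above is a plain HTTPS GET against the apiserver's /healthz endpoint, expecting HTTP 200 with the body "ok". The same probe by hand, sketched with -k because the cluster CA is not in the host trust store:
	  $ curl -k -sS https://192.168.103.2:8443/healthz
	  ok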
	I1025 09:54:17.496835  440020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:54:17.500516  440020 system_pods.go:59] 8 kube-system pods found
	I1025 09:54:17.500592  440020 system_pods.go:61] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:17.500600  440020 system_pods.go:61] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.500610  440020 system_pods.go:61] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.500613  440020 system_pods.go:61] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.500617  440020 system_pods.go:61] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.500620  440020 system_pods.go:61] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.500623  440020 system_pods.go:61] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.500627  440020 system_pods.go:61] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:17.500643  440020 system_pods.go:74] duration metric: took 3.795746ms to wait for pod list to return data ...
	I1025 09:54:17.500654  440020 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:54:17.503287  440020 default_sa.go:45] found service account: "default"
	I1025 09:54:17.503309  440020 default_sa.go:55] duration metric: took 2.649102ms for default service account to be created ...
	I1025 09:54:17.503319  440020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:54:17.506326  440020 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:17.506368  440020 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:17.506374  440020 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.506380  440020 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.506390  440020 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.506397  440020 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.506400  440020 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.506405  440020 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.506410  440020 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:17.506433  440020 retry.go:31] will retry after 188.876759ms: missing components: kube-dns
	I1025 09:54:17.700456  440020 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:17.700546  440020 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:17.700558  440020 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.700568  440020 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.700582  440020 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.700588  440020 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.700593  440020 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.700599  440020 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.700612  440020 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:17.700632  440020 retry.go:31] will retry after 250.335068ms: missing components: kube-dns
	I1025 09:54:17.955256  440020 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:17.955289  440020 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Running
	I1025 09:54:17.955295  440020 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.955298  440020 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.955302  440020 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.955307  440020 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.955311  440020 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.955314  440020 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.955317  440020 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Running
	I1025 09:54:17.955324  440020 system_pods.go:126] duration metric: took 451.999845ms to wait for k8s-apps to be running ...
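	Note: the loop above polls the kube-system pod list, retrying with backoff until kube-dns (CoreDNS) and the storage provisioner leave Pending. A one-shot equivalent with kubectl, sketched against the embed-certs-846915 context:
	  # Block until every kube-system pod reports Ready (what the retry loop waits for).
	  $ kubectl -n kube-system wait --for=condition=Ready pods --all --timeout=120s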
	I1025 09:54:17.955332  440020 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:54:17.955420  440020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:17.970053  440020 system_svc.go:56] duration metric: took 14.706919ms WaitForService to wait for kubelet
	I1025 09:54:17.970086  440020 kubeadm.go:586] duration metric: took 11.926881356s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:17.970111  440020 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:54:17.973494  440020 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:54:17.973526  440020 node_conditions.go:123] node cpu capacity is 8
	I1025 09:54:17.973543  440020 node_conditions.go:105] duration metric: took 3.426431ms to run NodePressure ...
	I1025 09:54:17.973558  440020 start.go:241] waiting for startup goroutines ...
	I1025 09:54:17.973567  440020 start.go:246] waiting for cluster config update ...
	I1025 09:54:17.973582  440020 start.go:255] writing updated cluster config ...
	I1025 09:54:17.973852  440020 ssh_runner.go:195] Run: rm -f paused
	I1025 09:54:17.978265  440020 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:17.982758  440020 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.987122  440020 pod_ready.go:94] pod "coredns-66bc5c9577-4w68k" is "Ready"
	I1025 09:54:17.987148  440020 pod_ready.go:86] duration metric: took 4.365303ms for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.989310  440020 pod_ready.go:83] waiting for pod "etcd-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.993594  440020 pod_ready.go:94] pod "etcd-embed-certs-846915" is "Ready"
	I1025 09:54:17.993619  440020 pod_ready.go:86] duration metric: took 4.284136ms for pod "etcd-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.995810  440020 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.999546  440020 pod_ready.go:94] pod "kube-apiserver-embed-certs-846915" is "Ready"
	I1025 09:54:17.999606  440020 pod_ready.go:86] duration metric: took 3.774304ms for pod "kube-apiserver-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.001621  440020 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.384665  440020 pod_ready.go:94] pod "kube-controller-manager-embed-certs-846915" is "Ready"
	I1025 09:54:18.384701  440020 pod_ready.go:86] duration metric: took 383.060784ms for pod "kube-controller-manager-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.583914  440020 pod_ready.go:83] waiting for pod "kube-proxy-kfqqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.982945  440020 pod_ready.go:94] pod "kube-proxy-kfqqh" is "Ready"
	I1025 09:54:18.982973  440020 pod_ready.go:86] duration metric: took 399.034255ms for pod "kube-proxy-kfqqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:19.184109  440020 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:19.584000  440020 pod_ready.go:94] pod "kube-scheduler-embed-certs-846915" is "Ready"
	I1025 09:54:19.584035  440020 pod_ready.go:86] duration metric: took 399.892029ms for pod "kube-scheduler-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:19.584051  440020 pod_ready.go:40] duration metric: took 1.605758265s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:19.650747  440020 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:54:19.652803  440020 out.go:179] * Done! kubectl is now configured to use "embed-certs-846915" cluster and "default" namespace by default
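	Note: "Done!" means the embed-certs-846915 credentials and context were written to the kubeconfig named in KUBECONFIG and made current. A quick verification sketch:
	  $ kubectl config current-context   # expected: embed-certs-846915
	  $ kubectl get nodes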
	W1025 09:54:16.068318  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:18.567974  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:18.301621  445741 pod_ready.go:94] pod "coredns-66bc5c9577-sw9hv" is "Ready"
	I1025 09:54:18.301648  445741 pod_ready.go:86] duration metric: took 9.506322482s for pod "coredns-66bc5c9577-sw9hv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.304547  445741 pod_ready.go:83] waiting for pod "etcd-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:54:20.312171  445741 pod_ready.go:104] pod "etcd-no-preload-656799" is not "Ready", error: <nil>
	I1025 09:54:21.809723  445741 pod_ready.go:94] pod "etcd-no-preload-656799" is "Ready"
	I1025 09:54:21.809749  445741 pod_ready.go:86] duration metric: took 3.505178884s for pod "etcd-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.812231  445741 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.816695  445741 pod_ready.go:94] pod "kube-apiserver-no-preload-656799" is "Ready"
	I1025 09:54:21.816722  445741 pod_ready.go:86] duration metric: took 4.466508ms for pod "kube-apiserver-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.819011  445741 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.823589  445741 pod_ready.go:94] pod "kube-controller-manager-no-preload-656799" is "Ready"
	I1025 09:54:21.823628  445741 pod_ready.go:86] duration metric: took 4.593239ms for pod "kube-controller-manager-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.825939  445741 pod_ready.go:83] waiting for pod "kube-proxy-gfph2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.010836  445741 pod_ready.go:94] pod "kube-proxy-gfph2" is "Ready"
	I1025 09:54:22.010862  445741 pod_ready.go:86] duration metric: took 184.902324ms for pod "kube-proxy-gfph2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.210739  445741 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.608665  445741 pod_ready.go:94] pod "kube-scheduler-no-preload-656799" is "Ready"
	I1025 09:54:22.608695  445741 pod_ready.go:86] duration metric: took 397.92747ms for pod "kube-scheduler-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.608710  445741 pod_ready.go:40] duration metric: took 13.818887723s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:22.670288  445741 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:54:22.672465  445741 out.go:179] * Done! kubectl is now configured to use "no-preload-656799" cluster and "default" namespace by default
	I1025 09:54:19.494507  449952 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-880773" ...
	I1025 09:54:19.494587  449952 cli_runner.go:164] Run: docker start default-k8s-diff-port-880773
	I1025 09:54:19.824726  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:19.851116  449952 kic.go:430] container "default-k8s-diff-port-880773" state is running.
	I1025 09:54:19.851830  449952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:54:19.874663  449952 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/config.json ...
	I1025 09:54:19.874958  449952 machine.go:93] provisionDockerMachine start ...
	I1025 09:54:19.875036  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:19.900142  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:19.900490  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:19.900509  449952 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:54:19.901160  449952 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54890->127.0.0.1:33250: read: connection reset by peer
	I1025 09:54:23.064068  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-880773
	
	I1025 09:54:23.064110  449952 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-880773"
	I1025 09:54:23.064192  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.086772  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:23.087065  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:23.087087  449952 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-880773 && echo "default-k8s-diff-port-880773" | sudo tee /etc/hostname
	I1025 09:54:23.252426  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-880773
	
	I1025 09:54:23.252521  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.273044  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:23.273316  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:23.273335  449952 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-880773' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-880773/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-880773' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:54:23.424572  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:54:23.424603  449952 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:54:23.424629  449952 ubuntu.go:190] setting up certificates
	I1025 09:54:23.424642  449952 provision.go:84] configureAuth start
	I1025 09:54:23.424716  449952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:54:23.447850  449952 provision.go:143] copyHostCerts
	I1025 09:54:23.447922  449952 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:54:23.447939  449952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:54:23.448010  449952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:54:23.448121  449952 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:54:23.448133  449952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:54:23.448172  449952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:54:23.448307  449952 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:54:23.448322  449952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:54:23.448386  449952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:54:23.448466  449952 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-880773 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-880773 localhost minikube]
	I1025 09:54:23.670392  449952 provision.go:177] copyRemoteCerts
	I1025 09:54:23.670473  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:54:23.670534  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.695861  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:23.810003  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:54:23.831919  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 09:54:23.855020  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:54:23.876651  449952 provision.go:87] duration metric: took 451.986685ms to configureAuth
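	Note: configureAuth above issues a fresh server certificate for the machine with the SANs listed in the log (127.0.0.1, 192.168.94.2, the profile name, localhost, minikube) and copies it to /etc/docker on the node. Inspecting the SANs of the generated cert, a sketch using the path from the log:
	  $ openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'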
	I1025 09:54:23.876682  449952 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:54:23.876901  449952 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:23.877015  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.898381  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:23.898653  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:23.898684  449952 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1025 09:54:20.568510  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:22.569444  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:25.068911  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:24.748214  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:54:24.748254  449952 machine.go:96] duration metric: took 4.873275374s to provisionDockerMachine
	I1025 09:54:24.748278  449952 start.go:293] postStartSetup for "default-k8s-diff-port-880773" (driver="docker")
	I1025 09:54:24.748293  449952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:54:24.748387  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:54:24.748520  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:24.768940  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:24.873795  449952 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:54:24.877543  449952 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:54:24.877575  449952 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:54:24.877589  449952 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:54:24.877661  449952 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:54:24.877782  449952 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:54:24.877958  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:54:24.887735  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:24.906567  449952 start.go:296] duration metric: took 158.269737ms for postStartSetup
	I1025 09:54:24.906638  449952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:54:24.906671  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:24.925060  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:25.024684  449952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:54:25.029312  449952 fix.go:56] duration metric: took 5.559580439s for fixHost
	I1025 09:54:25.029335  449952 start.go:83] releasing machines lock for "default-k8s-diff-port-880773", held for 5.559626356s
	I1025 09:54:25.029412  449952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:54:25.053651  449952 ssh_runner.go:195] Run: cat /version.json
	I1025 09:54:25.053671  449952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:54:25.053710  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:25.053740  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:25.076792  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:25.077574  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:25.177839  449952 ssh_runner.go:195] Run: systemctl --version
	I1025 09:54:25.232420  449952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:54:25.269857  449952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:54:25.274931  449952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:54:25.275022  449952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:54:25.283809  449952 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:54:25.283844  449952 start.go:495] detecting cgroup driver to use...
	I1025 09:54:25.283873  449952 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:54:25.283907  449952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:54:25.298715  449952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:54:25.311114  449952 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:54:25.311179  449952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:54:25.326245  449952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:54:25.338983  449952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:54:25.421886  449952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:54:25.507785  449952 docker.go:234] disabling docker service ...
	I1025 09:54:25.507851  449952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:54:25.522758  449952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:54:25.535545  449952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:54:25.624987  449952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:54:25.708591  449952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
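	Note: because the kicbase image ships multiple runtimes, the steps above stop and mask cri-docker and docker inside the node so only CRI-O serves the CRI socket. Checking the result on the node, sketched:
	  $ sudo systemctl is-enabled docker.service cri-docker.service   # expected: masked
	  $ sudo systemctl is-active crio                                 # expected: active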
	I1025 09:54:25.721462  449952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:54:25.736203  449952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:54:25.736286  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.745513  449952 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:54:25.745572  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.754426  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.763537  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.772424  449952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:54:25.780767  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.789663  449952 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.798468  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.807406  449952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:54:25.815004  449952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:54:25.822998  449952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:25.903676  449952 ssh_runner.go:195] Run: sudo systemctl restart crio
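	Note: the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image, selecting the systemd cgroup manager, putting conmon in the "pod" cgroup, and adding net.ipv4.ip_unprivileged_port_start=0 as a default sysctl, before restarting CRI-O. A post-restart spot check on the node, sketched:
	  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf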
	I1025 09:54:26.020906  449952 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:54:26.020973  449952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:54:26.025150  449952 start.go:563] Will wait 60s for crictl version
	I1025 09:54:26.025208  449952 ssh_runner.go:195] Run: which crictl
	I1025 09:54:26.029013  449952 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:54:26.057753  449952 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:54:26.057819  449952 ssh_runner.go:195] Run: crio --version
	I1025 09:54:26.086687  449952 ssh_runner.go:195] Run: crio --version
	I1025 09:54:26.116337  449952 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:54:26.117443  449952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-880773 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
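	Note: the long --format argument above is a Go template that reshapes `docker network inspect` output into the compact JSON summary minikube parses (name, driver, subnet, gateway, MTU, container IPs). A trimmed sketch of the same technique:
	  $ docker network inspect default-k8s-diff-port-880773 \
	      --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} gw={{.Gateway}}{{end}}'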
	I1025 09:54:26.135714  449952 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 09:54:26.140427  449952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
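	Note: the /etc/hosts rewrite above uses the standard work-around for the fact that in `sudo cmd > file` the redirection is performed by the unprivileged shell: the new content is assembled in /tmp first, then installed with sudo cp. The idiom in isolation, sketched:
	  # Build the new file as the calling user, then copy it into place as root.
	  $ { grep -v 'host.minikube.internal' /etc/hosts; printf '192.168.94.1\thost.minikube.internal\n'; } > /tmp/hosts.new
	  $ sudo cp /tmp/hosts.new /etc/hosts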
	I1025 09:54:26.154403  449952 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:54:26.154570  449952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:26.154635  449952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:26.192928  449952 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:26.192961  449952 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:54:26.193024  449952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:26.221578  449952 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:26.221602  449952 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:54:26.221611  449952 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1025 09:54:26.221708  449952 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-880773 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
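	Note: in the [Service] section above, the empty `ExecStart=` line clears the packaged kubelet command before the override substitutes minikube's flags; the snippet is installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. To see the merged unit on the node, a sketch:
	  $ sudo systemctl cat kubelet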
	I1025 09:54:26.221767  449952 ssh_runner.go:195] Run: crio config
	I1025 09:54:26.266519  449952 cni.go:84] Creating CNI manager for ""
	I1025 09:54:26.266551  449952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:26.266577  449952 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:54:26.266705  449952 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-880773 NodeName:default-k8s-diff-port-880773 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:54:26.266942  449952 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-880773"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
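	The rendered kubeadm config above is a single multi-document YAML file: four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`, all consumed by kubeadm from one `--config` file. A minimal, purely illustrative Go sketch (not minikube's code) that splits such a file and lists the kinds it contains:
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// Split a multi-document kubeadm config on YAML document separators
	// and report the "kind" of each document. Illustrative only; minikube
	// renders this file from a Go template before copying it to the node.
	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
		if err != nil {
			panic(err)
		}
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind:") {
					fmt.Printf("document %d: %s\n", i, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
				}
			}
		}
	}
	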
	I1025 09:54:26.267030  449952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:54:26.276099  449952 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:54:26.276158  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:54:26.283856  449952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 09:54:26.296736  449952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:54:26.309600  449952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1025 09:54:26.322267  449952 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:54:26.325950  449952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
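	The bash one-liner above is a hosts-entry upsert: it strips any existing line ending in control-plane.minikube.internal, appends the current node IP, and copies the result back, so the control-plane name always resolves to the node. A hedged Go sketch of the same upsert (the real code shells out to bash as shown, with sudo and a temp file):
	
	package main
	
	import (
		"os"
		"strings"
	)
	
	// Upsert a hosts entry: drop any line ending in "\t<name>", then
	// append "IP\tname". Mirrors the bash one-liner in the log; a sketch
	// only, with no sudo, locking, or atomic-rename handling.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		_ = upsertHost("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal")
	}
	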
	I1025 09:54:26.336085  449952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:26.418603  449952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:26.445329  449952 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773 for IP: 192.168.94.2
	I1025 09:54:26.445370  449952 certs.go:195] generating shared ca certs ...
	I1025 09:54:26.445391  449952 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:26.445589  449952 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:54:26.445651  449952 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:54:26.445663  449952 certs.go:257] generating profile certs ...
	I1025 09:54:26.445763  449952 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/client.key
	I1025 09:54:26.445836  449952 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key.bf049977
	I1025 09:54:26.445889  449952 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.key
	I1025 09:54:26.446021  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:54:26.446059  449952 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:54:26.446071  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:54:26.446100  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:54:26.446130  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:54:26.446159  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:54:26.446208  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:26.447082  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:54:26.467801  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:54:26.487512  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:54:26.507419  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:54:26.531864  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 09:54:26.550342  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:54:26.569273  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:54:26.587593  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:54:26.605286  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:54:26.623801  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:54:26.642803  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:54:26.660752  449952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:54:26.674006  449952 ssh_runner.go:195] Run: openssl version
	I1025 09:54:26.680368  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:54:26.689226  449952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:26.693134  449952 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:26.693180  449952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:26.728010  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:54:26.736810  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:54:26.746043  449952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:54:26.749893  449952 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:54:26.749943  449952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:54:26.785153  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:54:26.794063  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:54:26.802929  449952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:54:26.807038  449952 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:54:26.807101  449952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:54:26.844046  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
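	The `openssl x509 -hash` / `ln -fs` pairs above implement the standard OpenSSL CA-directory layout: trust anchors in /etc/ssl/certs are looked up by the certificate's subject-name hash plus a ".0" suffix (e.g. b5213941.0), so each PEM is symlinked under its hash. A hedged Go sketch of the same step, shelling out to openssl for the hash as the log does:
	
	package main
	
	import (
		"os"
		"os/exec"
		"strings"
	)
	
	// Link a CA cert into /etc/ssl/certs under its OpenSSL subject-hash
	// name ("<hash>.0"), the layout OpenSSL uses for trust lookups.
	// Sketch only: no error wrapping and no ".1" duplicate handling.
	func linkBySubjectHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // emulate ln -fs
		return os.Symlink(pemPath, link)
	}
	
	func main() {
		_ = linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	}
	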
	I1025 09:54:26.852738  449952 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:54:26.856516  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:54:26.892058  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:54:26.928987  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:54:26.978149  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:54:27.021912  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:54:27.075255  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
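	Each `-checkend 86400` run above asks OpenSSL whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit is what prompts cert regeneration. The equivalent check in Go with crypto/x509, as a sketch:
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether a PEM certificate's NotAfter falls
	// inside the given window, matching `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		fmt.Println(soon, err)
	}
	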
	I1025 09:54:27.132302  449952 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:27.132461  449952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:54:27.132541  449952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:54:27.166099  449952 cri.go:89] found id: "8a40c304121945c99334f375a4fc8f1073390b82cca6a44c6e2b224a5804ed43"
	I1025 09:54:27.166122  449952 cri.go:89] found id: "1099e940dc59e4a7fc6edf4f82c427fc4633cbc73d1759f0ef430fccd002219f"
	I1025 09:54:27.166131  449952 cri.go:89] found id: "b7360eb6624b8284557553c607130a8087e3690512dcc9caea4351f9f876fd02"
	I1025 09:54:27.166136  449952 cri.go:89] found id: "9a7e2aef555d4452a0b73ff6d39e556aaf40affe43c7adcaf8fc119b3910c298"
	I1025 09:54:27.166141  449952 cri.go:89] found id: ""
	I1025 09:54:27.166194  449952 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:54:27.179061  449952 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:27Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:27.179160  449952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:54:27.188157  449952 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:54:27.188180  449952 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:54:27.188228  449952 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:54:27.196153  449952 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:54:27.197499  449952 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-880773" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:27.198480  449952 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-130604/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-880773" cluster setting kubeconfig missing "default-k8s-diff-port-880773" context setting]
	I1025 09:54:27.199935  449952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:27.202256  449952 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:54:27.210782  449952 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1025 09:54:27.210819  449952 kubeadm.go:601] duration metric: took 22.632727ms to restartPrimaryControlPlane
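	The restart decision above hinges on the `diff -u` of /var/tmp/minikube/kubeadm.yaml against the freshly rendered kubeadm.yaml.new: identical files mean "does not require reconfiguration" and kubeadm re-init is skipped. A hedged Go sketch of that comparison (the real check also handles missing files and falls back to a full restart path):
	
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
	)
	
	// needsReconfig reports whether the freshly rendered kubeadm config
	// differs from the one already on disk, the same question the
	// `diff -u` in the log answers. Sketch only.
	func needsReconfig(current, rendered string) (bool, error) {
		a, err := os.ReadFile(current)
		if err != nil {
			return true, err
		}
		b, err := os.ReadFile(rendered)
		if err != nil {
			return true, err
		}
		return !bytes.Equal(a, b), nil
	}
	
	func main() {
		changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(changed, err)
	}
	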
	I1025 09:54:27.210865  449952 kubeadm.go:402] duration metric: took 78.655845ms to StartCluster
	I1025 09:54:27.210883  449952 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:27.210942  449952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:27.213436  449952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:27.213678  449952 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:54:27.213737  449952 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:54:27.213844  449952 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-880773"
	I1025 09:54:27.213859  449952 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-880773"
	I1025 09:54:27.213875  449952 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-880773"
	I1025 09:54:27.213886  449952 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-880773"
	I1025 09:54:27.213891  449952 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-880773"
	W1025 09:54:27.213898  449952 addons.go:247] addon dashboard should already be in state true
	I1025 09:54:27.213936  449952 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:54:27.213939  449952 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:27.213866  449952 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-880773"
	W1025 09:54:27.214066  449952 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:54:27.214095  449952 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:54:27.214261  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.214456  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.214610  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.216018  449952 out.go:179] * Verifying Kubernetes components...
	I1025 09:54:27.217234  449952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:27.239708  449952 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-880773"
	W1025 09:54:27.239738  449952 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:54:27.239770  449952 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:54:27.240253  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.242481  449952 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:54:27.242489  449952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:54:27.243627  449952 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:27.243645  449952 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:54:27.243651  449952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:54:27.243712  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:27.247468  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:54:27.247486  449952 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:54:27.247539  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:27.267591  449952 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:27.267622  449952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:54:27.267686  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:27.276575  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:27.285081  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:27.298498  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:27.368890  449952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:27.383755  449952 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-880773" to be "Ready" ...
	I1025 09:54:27.395977  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:54:27.396003  449952 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:54:27.406130  449952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:27.411552  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:54:27.411662  449952 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:54:27.419928  449952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:27.427159  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:54:27.427182  449952 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:54:27.446072  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:54:27.446100  449952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:54:27.471003  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:54:27.471033  449952 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:54:27.488999  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:54:27.489025  449952 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:54:27.503088  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:54:27.503113  449952 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:54:27.517184  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:54:27.517212  449952 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:54:27.530517  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:27.530540  449952 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:54:27.545962  449952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:29.018628  449952 node_ready.go:49] node "default-k8s-diff-port-880773" is "Ready"
	I1025 09:54:29.018668  449952 node_ready.go:38] duration metric: took 1.634880084s for node "default-k8s-diff-port-880773" to be "Ready" ...
	I1025 09:54:29.018686  449952 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:54:29.018740  449952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:54:29.506034  449952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.099869063s)
	I1025 09:54:29.506102  449952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.086134972s)
	I1025 09:54:29.506180  449952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.960181276s)
	I1025 09:54:29.506238  449952 api_server.go:72] duration metric: took 2.292529535s to wait for apiserver process to appear ...
	I1025 09:54:29.506289  449952 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:54:29.506306  449952 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1025 09:54:29.507716  449952 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-880773 addons enable metrics-server
	
	I1025 09:54:29.513028  449952 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:54:29.513055  449952 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:54:29.514792  449952 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1025 09:54:27.071249  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:29.568141  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:29.515891  449952 addons.go:514] duration metric: took 2.302163358s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 09:54:30.007035  449952 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1025 09:54:30.013495  449952 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:54:30.013618  449952 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:54:30.507293  449952 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1025 09:54:30.511406  449952 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1025 09:54:30.512375  449952 api_server.go:141] control plane version: v1.34.1
	I1025 09:54:30.512397  449952 api_server.go:131] duration metric: took 1.006101961s to wait for apiserver health ...
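	The healthz wait above polls GET https://192.168.94.2:8444/healthz roughly every 500ms (checks at 29.506, 30.007, 30.507), tolerating 500s while post-start hooks such as rbac/bootstrap-roles and the system priority classes finish, and stops on the first 200. A minimal polling sketch; InsecureSkipVerify here is an assumption standing in for loading minikube's CA, not the real client setup:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// Poll an apiserver /healthz endpoint until it returns 200 or the
	// deadline passes. Skipping TLS verification is a sketch shortcut;
	// the real client trusts the cluster CA instead.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz not ok within %s", timeout)
	}
	
	func main() {
		fmt.Println(waitHealthz("https://192.168.94.2:8444/healthz", time.Minute))
	}
	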
	I1025 09:54:30.512405  449952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:54:30.515834  449952 system_pods.go:59] 8 kube-system pods found
	I1025 09:54:30.515887  449952 system_pods.go:61] "coredns-66bc5c9577-29ltg" [5d5247ec-619e-4bcb-82c5-1d5c0b42b685] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:30.515906  449952 system_pods.go:61] "etcd-default-k8s-diff-port-880773" [abe5a2b4-061a-47af-9c04-41b3261607b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:54:30.515928  449952 system_pods.go:61] "kindnet-cnqn8" [c804731f-754b-4ce1-9609-1a6fc8cf317c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 09:54:30.515939  449952 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-880773" [e8188321-7de4-49f4-97f9-e7aeca6d00db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:54:30.515950  449952 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-880773" [29ba481f-eea8-41cb-bbde-2551ae253f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:54:30.515961  449952 system_pods.go:61] "kube-proxy-bg94v" [4b7ad6fe-03c3-41dd-9633-6ed6a648201f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:54:30.515973  449952 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-880773" [952c634f-45b2-401d-9a90-6d2123e839ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:54:30.515981  449952 system_pods.go:61] "storage-provisioner" [469fcc4c-281e-4595-aa3b-4ea853afb153] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:30.515998  449952 system_pods.go:74] duration metric: took 3.581249ms to wait for pod list to return data ...
	I1025 09:54:30.516008  449952 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:54:30.518119  449952 default_sa.go:45] found service account: "default"
	I1025 09:54:30.518138  449952 default_sa.go:55] duration metric: took 2.123947ms for default service account to be created ...
	I1025 09:54:30.518148  449952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:54:30.520334  449952 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:30.520372  449952 system_pods.go:89] "coredns-66bc5c9577-29ltg" [5d5247ec-619e-4bcb-82c5-1d5c0b42b685] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:30.520410  449952 system_pods.go:89] "etcd-default-k8s-diff-port-880773" [abe5a2b4-061a-47af-9c04-41b3261607b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:54:30.520421  449952 system_pods.go:89] "kindnet-cnqn8" [c804731f-754b-4ce1-9609-1a6fc8cf317c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 09:54:30.520430  449952 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-880773" [e8188321-7de4-49f4-97f9-e7aeca6d00db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:54:30.520439  449952 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-880773" [29ba481f-eea8-41cb-bbde-2551ae253f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:54:30.520446  449952 system_pods.go:89] "kube-proxy-bg94v" [4b7ad6fe-03c3-41dd-9633-6ed6a648201f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:54:30.520452  449952 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-880773" [952c634f-45b2-401d-9a90-6d2123e839ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:54:30.520458  449952 system_pods.go:89] "storage-provisioner" [469fcc4c-281e-4595-aa3b-4ea853afb153] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:30.520464  449952 system_pods.go:126] duration metric: took 2.311292ms to wait for k8s-apps to be running ...
	I1025 09:54:30.520472  449952 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:54:30.520522  449952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:30.533459  449952 system_svc.go:56] duration metric: took 12.977874ms WaitForService to wait for kubelet
	I1025 09:54:30.533492  449952 kubeadm.go:586] duration metric: took 3.319782027s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:30.533514  449952 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:54:30.536489  449952 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:54:30.536518  449952 node_conditions.go:123] node cpu capacity is 8
	I1025 09:54:30.536536  449952 node_conditions.go:105] duration metric: took 3.015821ms to run NodePressure ...
	I1025 09:54:30.536552  449952 start.go:241] waiting for startup goroutines ...
	I1025 09:54:30.536565  449952 start.go:246] waiting for cluster config update ...
	I1025 09:54:30.536584  449952 start.go:255] writing updated cluster config ...
	I1025 09:54:30.536891  449952 ssh_runner.go:195] Run: rm -f paused
	I1025 09:54:30.540962  449952 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:30.544202  449952 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-29ltg" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:54:32.550284  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:32.069496  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:34.069854  441651 pod_ready.go:94] pod "coredns-5dd5756b68-qffxt" is "Ready"
	I1025 09:54:34.069885  441651 pod_ready.go:86] duration metric: took 37.507966247s for pod "coredns-5dd5756b68-qffxt" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.074076  441651 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.080201  441651 pod_ready.go:94] pod "etcd-old-k8s-version-676314" is "Ready"
	I1025 09:54:34.080244  441651 pod_ready.go:86] duration metric: took 6.136939ms for pod "etcd-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.084014  441651 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.089480  441651 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-676314" is "Ready"
	I1025 09:54:34.089513  441651 pod_ready.go:86] duration metric: took 5.467331ms for pod "kube-apiserver-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.092917  441651 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.266390  441651 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-676314" is "Ready"
	I1025 09:54:34.266419  441651 pod_ready.go:86] duration metric: took 173.473814ms for pod "kube-controller-manager-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.468281  441651 pod_ready.go:83] waiting for pod "kube-proxy-bsxx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.866765  441651 pod_ready.go:94] pod "kube-proxy-bsxx6" is "Ready"
	I1025 09:54:34.866794  441651 pod_ready.go:86] duration metric: took 398.483847ms for pod "kube-proxy-bsxx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:35.067296  441651 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:35.466580  441651 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-676314" is "Ready"
	I1025 09:54:35.466609  441651 pod_ready.go:86] duration metric: took 399.280578ms for pod "kube-scheduler-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:35.466637  441651 pod_ready.go:40] duration metric: took 38.910774112s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
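	The pod_ready waits above test each pod's Ready condition, or its disappearance, rather than only the Running phase; that is why pods can show "Running / Ready:ContainersNotReady" earlier in the log while the wait continues. A client-go sketch of that predicate, assuming a reachable kubeconfig; the pod name is taken from the log for illustration:
	
	package main
	
	import (
		"context"
		"fmt"
		"os"
	
		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReadyOrGone mirrors the "Ready or be gone" predicate in the log:
	// true if the pod no longer exists, or its Ready condition is True.
	func podReadyOrGone(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // gone counts as done
		}
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ok, err := podReadyOrGone(context.Background(), cs, "kube-system", "coredns-66bc5c9577-29ltg")
		fmt.Println(ok, err)
	}
	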
	I1025 09:54:35.520724  441651 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1025 09:54:35.522478  441651 out.go:203] 
	W1025 09:54:35.525673  441651 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 09:54:35.527157  441651 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 09:54:35.528391  441651 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-676314" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 09:54:22 no-preload-656799 crio[557]: time="2025-10-25T09:54:22.025206407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:22 no-preload-656799 crio[557]: time="2025-10-25T09:54:22.064993644Z" level=info msg="Created container f3921e136c33c717080e599280d369230d5c8c4d560187b222fd092310a533b7: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-45hnf/kubernetes-dashboard" id=ceb53ef9-7d42-4a29-9f9e-73cd532c700b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:22 no-preload-656799 crio[557]: time="2025-10-25T09:54:22.065755223Z" level=info msg="Starting container: f3921e136c33c717080e599280d369230d5c8c4d560187b222fd092310a533b7" id=271c9f37-2846-4a95-9812-5b09e1f2a3f3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:22 no-preload-656799 crio[557]: time="2025-10-25T09:54:22.068002931Z" level=info msg="Started container" PID=1503 containerID=f3921e136c33c717080e599280d369230d5c8c4d560187b222fd092310a533b7 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-45hnf/kubernetes-dashboard id=271c9f37-2846-4a95-9812-5b09e1f2a3f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=395df48cb744d03089d4e34fbf8d6efd162aaba2d4dddfff463b3213b7b7dea9
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.816854328Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=87278037-c133-4238-a499-405875d94ec7 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.817497359Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=afbbbceb-6f25-484f-b23a-6aeb276df16c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.820101898Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d08771ea-08c9-441c-b7ec-9a0cd9812509 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.825480734Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper" id=dbe9e535-c1a2-4276-a6e8-13cef1da9465 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.825610104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.832113776Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.832643461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.861362196Z" level=info msg="Created container 469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper" id=dbe9e535-c1a2-4276-a6e8-13cef1da9465 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.861925302Z" level=info msg="Starting container: 469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6" id=b53d67a3-f16c-4c66-b2ff-4c469dd0ed68 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:24 no-preload-656799 crio[557]: time="2025-10-25T09:54:24.863731509Z" level=info msg="Started container" PID=1744 containerID=469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper id=b53d67a3-f16c-4c66-b2ff-4c469dd0ed68 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7083f13ed83c751d35867c2af1705840a6de654ddc4ce55e2dbc5c7af81808a6
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.035971341Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e6dcd91f-e1d9-46ad-b045-68e9a37bcaf2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.039031296Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=16edc3fa-4326-4089-bd97-c2f9465ea214 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.042407706Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper" id=8ee8468d-dd50-45e8-9812-ea9fd895b058 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.042545002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.051738636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.052266877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.078311722Z" level=info msg="Created container 9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper" id=8ee8468d-dd50-45e8-9812-ea9fd895b058 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.079082991Z" level=info msg="Starting container: 9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8" id=9765ff80-b0fb-43d9-9896-74e8a94a45b4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:25 no-preload-656799 crio[557]: time="2025-10-25T09:54:25.081473758Z" level=info msg="Started container" PID=1755 containerID=9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper id=9765ff80-b0fb-43d9-9896-74e8a94a45b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7083f13ed83c751d35867c2af1705840a6de654ddc4ce55e2dbc5c7af81808a6
	Oct 25 09:54:26 no-preload-656799 crio[557]: time="2025-10-25T09:54:26.041832895Z" level=info msg="Removing container: 469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6" id=f2f23fc3-39ed-4eba-8b2e-3c28c625a607 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:54:26 no-preload-656799 crio[557]: time="2025-10-25T09:54:26.051800419Z" level=info msg="Removed container 469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh/dashboard-metrics-scraper" id=f2f23fc3-39ed-4eba-8b2e-3c28c625a607 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9c6377eb3e36d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   1                   7083f13ed83c7       dashboard-metrics-scraper-6ffb444bf9-qgpjh   kubernetes-dashboard
	f3921e136c33c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   17 seconds ago      Running             kubernetes-dashboard        0                   395df48cb744d       kubernetes-dashboard-855c9754f9-45hnf        kubernetes-dashboard
	9672e2b096219       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           28 seconds ago      Running             coredns                     0                   b34368269f37d       coredns-66bc5c9577-sw9hv                     kube-system
	f3a7d9ca625b7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           28 seconds ago      Running             busybox                     1                   771a965a5a19c       busybox                                      default
	f4d5c57b415b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           31 seconds ago      Exited              storage-provisioner         0                   eb71971fd35e6       storage-provisioner                          kube-system
	e995999ac2d28       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           31 seconds ago      Running             kindnet-cni                 0                   445fa6ed2f44b       kindnet-nbj7f                                kube-system
	891b68d0f8428       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           31 seconds ago      Running             kube-proxy                  0                   901fa2f0399cd       kube-proxy-gfph2                             kube-system
	a5016565fe92c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           34 seconds ago      Running             kube-controller-manager     0                   e0a9a0ead2e5c       kube-controller-manager-no-preload-656799    kube-system
	8094fc5d7b37a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           34 seconds ago      Running             kube-scheduler              0                   d1b1ff8bdf389       kube-scheduler-no-preload-656799             kube-system
	a6c43c376a4b3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           34 seconds ago      Running             kube-apiserver              0                   15932ce04aeaa       kube-apiserver-no-preload-656799             kube-system
	a75cff0462b22       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           34 seconds ago      Running             etcd                        0                   924ec9fe168dc       etcd-no-preload-656799                       kube-system
	
	
	==> coredns [9672e2b09621917c8753e5c69bbf1397081ae463b4fcf497c6aa6562d4b475d8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47145 - 19496 "HINFO IN 6985121326916034595.722092135468356731. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.07090186s
	
	
	==> describe nodes <==
	Name:               no-preload-656799
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-656799
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=no-preload-656799
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_53_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:53:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-656799
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:54:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:54:17 +0000   Sat, 25 Oct 2025 09:53:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:54:17 +0000   Sat, 25 Oct 2025 09:53:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:54:17 +0000   Sat, 25 Oct 2025 09:53:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:54:17 +0000   Sat, 25 Oct 2025 09:54:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-656799
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                5bcc7607-4d30-49cf-9ec1-c2712dc2e9c1
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 coredns-66bc5c9577-sw9hv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     84s
	  kube-system                 etcd-no-preload-656799                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         89s
	  kube-system                 kindnet-nbj7f                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      85s
	  kube-system                 kube-apiserver-no-preload-656799              250m (3%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-no-preload-656799     200m (2%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-gfph2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-no-preload-656799              100m (1%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qgpjh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-45hnf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 31s                kube-proxy       
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node no-preload-656799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node no-preload-656799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                kubelet          Node no-preload-656799 status is now: NodeHasSufficientPID
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           85s                node-controller  Node no-preload-656799 event: Registered Node no-preload-656799 in Controller
	  Normal  NodeReady                71s                kubelet          Node no-preload-656799 status is now: NodeReady
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 35s)  kubelet          Node no-preload-656799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 35s)  kubelet          Node no-preload-656799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 35s)  kubelet          Node no-preload-656799 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node no-preload-656799 event: Registered Node no-preload-656799 in Controller
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	
	
	==> etcd [a75cff0462b2260fa975ed411fc9a80d7004abff2c65effeccfd7e1fe5b26257] <==
	{"level":"warn","ts":"2025-10-25T09:54:06.779545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.787093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.802075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.809106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.815576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.823654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.830108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.837440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.845759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.854124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.863708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.884009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.897133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.903822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.910411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.917114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.923956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.930646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.937619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.946698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.962673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.966158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.973831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:06.981573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:07.042800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41556","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:54:40 up  1:37,  0 user,  load average: 6.33, 4.75, 2.87
	Linux no-preload-656799 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e995999ac2d28a730193b4932ce9f0a03b7388dd1c393907b0ad9b4e573b6329] <==
	I1025 09:54:08.598154       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:54:08.613750       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:54:08.614031       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:54:08.614056       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:54:08.614095       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:54:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:54:08.905050       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:54:08.905084       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:54:08.905100       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:54:08.905273       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:54:09.205437       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:54:09.205469       1 metrics.go:72] Registering metrics
	I1025 09:54:09.205560       1 controller.go:711] "Syncing nftables rules"
	I1025 09:54:18.905330       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:54:18.905442       1 main.go:301] handling current node
	I1025 09:54:28.905542       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:54:28.905599       1 main.go:301] handling current node
	I1025 09:54:38.914478       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:54:38.914528       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a6c43c376a4b3de0805237ed87bb2bed809e8771389a9c4f6da15c3125a99803] <==
	I1025 09:54:07.591671       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:54:07.592774       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:54:07.592795       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:54:07.593001       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:54:07.593034       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:54:07.593593       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 09:54:07.593859       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:54:07.593953       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:54:07.602438       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:54:07.608918       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:54:07.621210       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:54:07.621260       1 policy_source.go:240] refreshing policies
	I1025 09:54:07.629862       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:54:07.988312       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:54:08.033373       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:54:08.047189       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:54:08.077172       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:54:08.090903       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:54:08.155627       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.34.159"}
	I1025 09:54:08.174887       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.90.56"}
	I1025 09:54:08.495143       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:54:11.286450       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:54:11.435698       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:54:11.532990       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a5016565fe92c6c3e2b7f15714ef4e22e9a01067673cac39fa54fcac388a2b87] <==
	I1025 09:54:10.929989       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:54:10.930242       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:54:10.930401       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:54:10.930486       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:54:10.930626       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:54:10.930690       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:54:10.930690       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:54:10.933499       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:54:10.934948       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:54:10.934998       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:54:10.935004       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:54:10.935026       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:54:10.935035       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:54:10.935040       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:54:10.937401       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:54:10.937422       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:54:10.937408       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:54:10.937610       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:54:10.940866       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:54:10.944160       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:54:10.946434       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:54:10.949637       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:54:10.949800       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:54:10.951905       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:54:20.882947       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [891b68d0f84289dab3ab047662084fe3d552922e5f89141313e5f0f5b1b1c532] <==
	I1025 09:54:08.391606       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:54:08.464928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:54:08.565612       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:54:08.565656       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 09:54:08.565766       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:54:08.590332       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:54:08.590485       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:54:08.598021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:54:08.598450       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:54:08.598470       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:54:08.600483       1 config.go:309] "Starting node config controller"
	I1025 09:54:08.600503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:54:08.600615       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:54:08.600643       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:54:08.600704       1 config.go:200] "Starting service config controller"
	I1025 09:54:08.600710       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:54:08.600744       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:54:08.600749       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:54:08.701611       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:54:08.701675       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:54:08.701694       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:54:08.701708       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8094fc5d7b37a8f46ff289c9c571c8256e7a44a478343b03510438967ec370e0] <==
	I1025 09:54:06.820529       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:54:07.531182       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:54:07.531244       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:54:07.531258       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:54:07.531267       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:54:07.563460       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:54:07.563496       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:54:07.566885       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:54:07.567015       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:54:07.570237       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:54:07.570285       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:54:07.667459       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:54:08 no-preload-656799 kubelet[707]: E1025 09:54:08.648721     707 projected.go:196] Error preparing data for projected volume kube-api-access-hc4xt for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Oct 25 09:54:08 no-preload-656799 kubelet[707]: E1025 09:54:08.648808     707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e58484e4-93ad-4c1e-af87-8034efb88486-kube-api-access-hc4xt podName:e58484e4-93ad-4c1e-af87-8034efb88486 nodeName:}" failed. No retries permitted until 2025-10-25 09:54:09.648783827 +0000 UTC m=+4.787245494 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hc4xt" (UniqueName: "kubernetes.io/projected/e58484e4-93ad-4c1e-af87-8034efb88486-kube-api-access-hc4xt") pod "busybox" (UID: "e58484e4-93ad-4c1e-af87-8034efb88486") : object "default"/"kube-root-ca.crt" not registered
	Oct 25 09:54:09 no-preload-656799 kubelet[707]: E1025 09:54:09.553719     707 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 09:54:09 no-preload-656799 kubelet[707]: E1025 09:54:09.553848     707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b8784813-9a51-43f5-ae3a-d5f9a1cd7d41-config-volume podName:b8784813-9a51-43f5-ae3a-d5f9a1cd7d41 nodeName:}" failed. No retries permitted until 2025-10-25 09:54:11.553822833 +0000 UTC m=+6.692284498 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b8784813-9a51-43f5-ae3a-d5f9a1cd7d41-config-volume") pod "coredns-66bc5c9577-sw9hv" (UID: "b8784813-9a51-43f5-ae3a-d5f9a1cd7d41") : object "kube-system"/"coredns" not registered
	Oct 25 09:54:09 no-preload-656799 kubelet[707]: E1025 09:54:09.654903     707 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Oct 25 09:54:09 no-preload-656799 kubelet[707]: E1025 09:54:09.654938     707 projected.go:196] Error preparing data for projected volume kube-api-access-hc4xt for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Oct 25 09:54:09 no-preload-656799 kubelet[707]: E1025 09:54:09.655002     707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e58484e4-93ad-4c1e-af87-8034efb88486-kube-api-access-hc4xt podName:e58484e4-93ad-4c1e-af87-8034efb88486 nodeName:}" failed. No retries permitted until 2025-10-25 09:54:11.654987813 +0000 UTC m=+6.793449464 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hc4xt" (UniqueName: "kubernetes.io/projected/e58484e4-93ad-4c1e-af87-8034efb88486-kube-api-access-hc4xt") pod "busybox" (UID: "e58484e4-93ad-4c1e-af87-8034efb88486") : object "default"/"kube-root-ca.crt" not registered
	Oct 25 09:54:16 no-preload-656799 kubelet[707]: I1025 09:54:16.860921     707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:54:18 no-preload-656799 kubelet[707]: I1025 09:54:18.004895     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jr9z\" (UniqueName: \"kubernetes.io/projected/4bfa16b2-fe16-47c9-8bd7-63c64dae30ac-kube-api-access-7jr9z\") pod \"kubernetes-dashboard-855c9754f9-45hnf\" (UID: \"4bfa16b2-fe16-47c9-8bd7-63c64dae30ac\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-45hnf"
	Oct 25 09:54:18 no-preload-656799 kubelet[707]: I1025 09:54:18.004963     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9d8dbc62-2ba1-4794-baab-600f510e30ab-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qgpjh\" (UID: \"9d8dbc62-2ba1-4794-baab-600f510e30ab\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh"
	Oct 25 09:54:18 no-preload-656799 kubelet[707]: I1025 09:54:18.005095     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfpbm\" (UniqueName: \"kubernetes.io/projected/9d8dbc62-2ba1-4794-baab-600f510e30ab-kube-api-access-hfpbm\") pod \"dashboard-metrics-scraper-6ffb444bf9-qgpjh\" (UID: \"9d8dbc62-2ba1-4794-baab-600f510e30ab\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh"
	Oct 25 09:54:18 no-preload-656799 kubelet[707]: I1025 09:54:18.005147     707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4bfa16b2-fe16-47c9-8bd7-63c64dae30ac-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-45hnf\" (UID: \"4bfa16b2-fe16-47c9-8bd7-63c64dae30ac\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-45hnf"
	Oct 25 09:54:25 no-preload-656799 kubelet[707]: I1025 09:54:25.035534     707 scope.go:117] "RemoveContainer" containerID="469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6"
	Oct 25 09:54:25 no-preload-656799 kubelet[707]: I1025 09:54:25.047215     707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-45hnf" podStartSLOduration=10.295019954 podStartE2EDuration="14.047196257s" podCreationTimestamp="2025-10-25 09:54:11 +0000 UTC" firstStartedPulling="2025-10-25 09:54:18.262795437 +0000 UTC m=+13.401257088" lastFinishedPulling="2025-10-25 09:54:22.01497174 +0000 UTC m=+17.153433391" observedRunningTime="2025-10-25 09:54:23.044930319 +0000 UTC m=+18.183391992" watchObservedRunningTime="2025-10-25 09:54:25.047196257 +0000 UTC m=+20.185657929"
	Oct 25 09:54:26 no-preload-656799 kubelet[707]: I1025 09:54:26.040338     707 scope.go:117] "RemoveContainer" containerID="469537152a37ffeb3adaf7d8d76d7a7b7d6f0f4e48dd494931507f69f4b88ce6"
	Oct 25 09:54:26 no-preload-656799 kubelet[707]: I1025 09:54:26.040640     707 scope.go:117] "RemoveContainer" containerID="9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8"
	Oct 25 09:54:26 no-preload-656799 kubelet[707]: E1025 09:54:26.040838     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpjh_kubernetes-dashboard(9d8dbc62-2ba1-4794-baab-600f510e30ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh" podUID="9d8dbc62-2ba1-4794-baab-600f510e30ab"
	Oct 25 09:54:27 no-preload-656799 kubelet[707]: I1025 09:54:27.047458     707 scope.go:117] "RemoveContainer" containerID="9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8"
	Oct 25 09:54:27 no-preload-656799 kubelet[707]: E1025 09:54:27.047720     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpjh_kubernetes-dashboard(9d8dbc62-2ba1-4794-baab-600f510e30ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh" podUID="9d8dbc62-2ba1-4794-baab-600f510e30ab"
	Oct 25 09:54:28 no-preload-656799 kubelet[707]: I1025 09:54:28.230192     707 scope.go:117] "RemoveContainer" containerID="9c6377eb3e36d19cb28a7bba69a7291abdf8d5f49afb81570a6dee23440be4c8"
	Oct 25 09:54:28 no-preload-656799 kubelet[707]: E1025 09:54:28.230440     707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qgpjh_kubernetes-dashboard(9d8dbc62-2ba1-4794-baab-600f510e30ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qgpjh" podUID="9d8dbc62-2ba1-4794-baab-600f510e30ab"
	Oct 25 09:54:34 no-preload-656799 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:54:35 no-preload-656799 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:54:35 no-preload-656799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:54:35 no-preload-656799 systemd[1]: kubelet.service: Consumed 1.234s CPU time.
	
	
	==> kubernetes-dashboard [f3921e136c33c717080e599280d369230d5c8c4d560187b222fd092310a533b7] <==
	2025/10/25 09:54:22 Using namespace: kubernetes-dashboard
	2025/10/25 09:54:22 Using in-cluster config to connect to apiserver
	2025/10/25 09:54:22 Using secret token for csrf signing
	2025/10/25 09:54:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:54:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:54:22 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:54:22 Generating JWE encryption key
	2025/10/25 09:54:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:54:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:54:22 Initializing JWE encryption key from synchronized object
	2025/10/25 09:54:22 Creating in-cluster Sidecar client
	2025/10/25 09:54:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:54:22 Serving insecurely on HTTP port: 9090
	2025/10/25 09:54:22 Starting overwatch
	
	
	==> storage-provisioner [f4d5c57b415b11d71b55538ee6f875fbd2524c78bbfa0f6e22f11fbe7622f2fb] <==
	I1025 09:54:08.352829       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:54:38.356769       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
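
The storage-provisioner crash in the log above is the standard client-go discovery probe failing to reach the kubernetes service VIP. Below is a minimal sketch of that probe, assuming in-cluster credentials; it is illustrative, not the provisioner's actual source.

	package main

	import (
		"fmt"
		"log"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config reads the pod's service-account token and the
		// KUBERNETES_SERVICE_HOST/PORT env vars (10.96.0.1:443 in this cluster).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("in-cluster config: %v", err)
		}
		// client-go's discovery client defaults the request timeout to 32s
		// when unset, which is where the "?timeout=32s" in the captured URL
		// comes from; made explicit here for the sketch.
		cfg.Timeout = 32 * time.Second

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("clientset: %v", err)
		}

		// The GET /version call that timed out above; "dial tcp ... i/o
		// timeout" on the service VIP usually means kube-proxy had not yet
		// programmed the rules that route 10.96.0.1 to the apiserver.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		fmt.Printf("apiserver version: %s\n", v.GitVersion)
	}
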
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-656799 -n no-preload-656799
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-656799 -n no-preload-656799: exit status 2 (325.062684ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-656799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.22s)
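
The final helpers_test check above shells out to kubectl with a server-side field selector to surface any non-Running pods. A minimal client-go equivalent follows; the default kubeconfig path and current context are assumptions here (the test itself passes --context no-preload-656799).

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from the default kubeconfig (~/.kube/config);
		// context selection is omitted in this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatalf("kubeconfig: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("clientset: %v", err)
		}

		// Same server-side filter as the helpers_test kubectl call: every
		// pod, in every namespace, whose phase is not Running. The filter
		// is evaluated by the apiserver, so only matches cross the wire.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatalf("list pods: %v", err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
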

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-676314 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-676314 --alsologtostderr -v=1: exit status 80 (2.106030281s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-676314 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:54:47.250805  456136 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:54:47.251053  456136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:47.251061  456136 out.go:374] Setting ErrFile to fd 2...
	I1025 09:54:47.251066  456136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:47.251283  456136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:54:47.251561  456136 out.go:368] Setting JSON to false
	I1025 09:54:47.251600  456136 mustload.go:65] Loading cluster: old-k8s-version-676314
	I1025 09:54:47.251916  456136 config.go:182] Loaded profile config "old-k8s-version-676314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 09:54:47.252287  456136 cli_runner.go:164] Run: docker container inspect old-k8s-version-676314 --format={{.State.Status}}
	I1025 09:54:47.270793  456136 host.go:66] Checking if "old-k8s-version-676314" exists ...
	I1025 09:54:47.271101  456136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:47.328020  456136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-25 09:54:47.317140118 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:47.328662  456136 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-676314 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:54:47.330558  456136 out.go:179] * Pausing node old-k8s-version-676314 ... 
	I1025 09:54:47.331784  456136 host.go:66] Checking if "old-k8s-version-676314" exists ...
	I1025 09:54:47.332036  456136 ssh_runner.go:195] Run: systemctl --version
	I1025 09:54:47.332072  456136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-676314
	I1025 09:54:47.349052  456136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33240 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/old-k8s-version-676314/id_rsa Username:docker}
	I1025 09:54:47.448746  456136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:47.474947  456136 pause.go:52] kubelet running: true
	I1025 09:54:47.475032  456136 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:54:47.643205  456136 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:54:47.643296  456136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:54:47.709784  456136 cri.go:89] found id: "13d9708c5d841b464c18aa3085829c1ab76dddd4a9cf55a722726920eebfa86f"
	I1025 09:54:47.709807  456136 cri.go:89] found id: "44e78a06fe8d3364412a49fe97c33eb05da0e0b00edd440ec10e521482e09243"
	I1025 09:54:47.709811  456136 cri.go:89] found id: "54bf43bd1d263e36fbfe11af76068cfa27fe7fa93a9489c9da3f96cb570ea54f"
	I1025 09:54:47.709814  456136 cri.go:89] found id: "672fe80d5a9e8a660c7eeaa5838bb3818c5b279a10306e12acf11595a752ce55"
	I1025 09:54:47.709816  456136 cri.go:89] found id: "7e9c9db60e85d067f416ac5fcd2862f37a4db9681670c0ae9adf96066420d66d"
	I1025 09:54:47.709820  456136 cri.go:89] found id: "1bdedceab1946592ada2ecf0f626b7e132c6c022e02bd19d57ece6929d21893a"
	I1025 09:54:47.709822  456136 cri.go:89] found id: "208f766f9a2264a90389e0a3255784544b9fe39f037b5319e382c5f93fe9822c"
	I1025 09:54:47.709824  456136 cri.go:89] found id: "e2fb3d4360165e17cdfb3eb5777d2f68e824a705c64256daac2adcefb4d9af8b"
	I1025 09:54:47.709827  456136 cri.go:89] found id: "6d19999376dc5316d611262641b47285d726876bb53bf4a498c9ab5d06c8b371"
	I1025 09:54:47.709846  456136 cri.go:89] found id: "ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a"
	I1025 09:54:47.709849  456136 cri.go:89] found id: "32df4b3331837228098cd723b7cc594d68d17397153f50ef57dfee6afc0cfab0"
	I1025 09:54:47.709852  456136 cri.go:89] found id: ""
	I1025 09:54:47.709891  456136 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:54:47.721501  456136 retry.go:31] will retry after 239.331389ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:47Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:47.962012  456136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:47.975151  456136 pause.go:52] kubelet running: false
	I1025 09:54:47.975199  456136 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:54:48.112764  456136 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:54:48.112859  456136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:54:48.177480  456136 cri.go:89] found id: "13d9708c5d841b464c18aa3085829c1ab76dddd4a9cf55a722726920eebfa86f"
	I1025 09:54:48.177501  456136 cri.go:89] found id: "44e78a06fe8d3364412a49fe97c33eb05da0e0b00edd440ec10e521482e09243"
	I1025 09:54:48.177504  456136 cri.go:89] found id: "54bf43bd1d263e36fbfe11af76068cfa27fe7fa93a9489c9da3f96cb570ea54f"
	I1025 09:54:48.177507  456136 cri.go:89] found id: "672fe80d5a9e8a660c7eeaa5838bb3818c5b279a10306e12acf11595a752ce55"
	I1025 09:54:48.177510  456136 cri.go:89] found id: "7e9c9db60e85d067f416ac5fcd2862f37a4db9681670c0ae9adf96066420d66d"
	I1025 09:54:48.177513  456136 cri.go:89] found id: "1bdedceab1946592ada2ecf0f626b7e132c6c022e02bd19d57ece6929d21893a"
	I1025 09:54:48.177515  456136 cri.go:89] found id: "208f766f9a2264a90389e0a3255784544b9fe39f037b5319e382c5f93fe9822c"
	I1025 09:54:48.177530  456136 cri.go:89] found id: "e2fb3d4360165e17cdfb3eb5777d2f68e824a705c64256daac2adcefb4d9af8b"
	I1025 09:54:48.177533  456136 cri.go:89] found id: "6d19999376dc5316d611262641b47285d726876bb53bf4a498c9ab5d06c8b371"
	I1025 09:54:48.177539  456136 cri.go:89] found id: "ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a"
	I1025 09:54:48.177542  456136 cri.go:89] found id: "32df4b3331837228098cd723b7cc594d68d17397153f50ef57dfee6afc0cfab0"
	I1025 09:54:48.177544  456136 cri.go:89] found id: ""
	I1025 09:54:48.177582  456136 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:54:48.189169  456136 retry.go:31] will retry after 289.70189ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:48Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:48.479486  456136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:48.492603  456136 pause.go:52] kubelet running: false
	I1025 09:54:48.492670  456136 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:54:48.638769  456136 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:54:48.638848  456136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:54:48.704829  456136 cri.go:89] found id: "13d9708c5d841b464c18aa3085829c1ab76dddd4a9cf55a722726920eebfa86f"
	I1025 09:54:48.704864  456136 cri.go:89] found id: "44e78a06fe8d3364412a49fe97c33eb05da0e0b00edd440ec10e521482e09243"
	I1025 09:54:48.704870  456136 cri.go:89] found id: "54bf43bd1d263e36fbfe11af76068cfa27fe7fa93a9489c9da3f96cb570ea54f"
	I1025 09:54:48.704875  456136 cri.go:89] found id: "672fe80d5a9e8a660c7eeaa5838bb3818c5b279a10306e12acf11595a752ce55"
	I1025 09:54:48.704880  456136 cri.go:89] found id: "7e9c9db60e85d067f416ac5fcd2862f37a4db9681670c0ae9adf96066420d66d"
	I1025 09:54:48.704885  456136 cri.go:89] found id: "1bdedceab1946592ada2ecf0f626b7e132c6c022e02bd19d57ece6929d21893a"
	I1025 09:54:48.704887  456136 cri.go:89] found id: "208f766f9a2264a90389e0a3255784544b9fe39f037b5319e382c5f93fe9822c"
	I1025 09:54:48.704890  456136 cri.go:89] found id: "e2fb3d4360165e17cdfb3eb5777d2f68e824a705c64256daac2adcefb4d9af8b"
	I1025 09:54:48.704892  456136 cri.go:89] found id: "6d19999376dc5316d611262641b47285d726876bb53bf4a498c9ab5d06c8b371"
	I1025 09:54:48.704906  456136 cri.go:89] found id: "ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a"
	I1025 09:54:48.704909  456136 cri.go:89] found id: "32df4b3331837228098cd723b7cc594d68d17397153f50ef57dfee6afc0cfab0"
	I1025 09:54:48.704911  456136 cri.go:89] found id: ""
	I1025 09:54:48.704948  456136 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:54:48.716971  456136 retry.go:31] will retry after 339.391016ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:48Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:49.056509  456136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:49.069559  456136 pause.go:52] kubelet running: false
	I1025 09:54:49.069633  456136 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:54:49.211546  456136 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:54:49.211610  456136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:54:49.277639  456136 cri.go:89] found id: "13d9708c5d841b464c18aa3085829c1ab76dddd4a9cf55a722726920eebfa86f"
	I1025 09:54:49.277667  456136 cri.go:89] found id: "44e78a06fe8d3364412a49fe97c33eb05da0e0b00edd440ec10e521482e09243"
	I1025 09:54:49.277673  456136 cri.go:89] found id: "54bf43bd1d263e36fbfe11af76068cfa27fe7fa93a9489c9da3f96cb570ea54f"
	I1025 09:54:49.277678  456136 cri.go:89] found id: "672fe80d5a9e8a660c7eeaa5838bb3818c5b279a10306e12acf11595a752ce55"
	I1025 09:54:49.277681  456136 cri.go:89] found id: "7e9c9db60e85d067f416ac5fcd2862f37a4db9681670c0ae9adf96066420d66d"
	I1025 09:54:49.277687  456136 cri.go:89] found id: "1bdedceab1946592ada2ecf0f626b7e132c6c022e02bd19d57ece6929d21893a"
	I1025 09:54:49.277691  456136 cri.go:89] found id: "208f766f9a2264a90389e0a3255784544b9fe39f037b5319e382c5f93fe9822c"
	I1025 09:54:49.277694  456136 cri.go:89] found id: "e2fb3d4360165e17cdfb3eb5777d2f68e824a705c64256daac2adcefb4d9af8b"
	I1025 09:54:49.277698  456136 cri.go:89] found id: "6d19999376dc5316d611262641b47285d726876bb53bf4a498c9ab5d06c8b371"
	I1025 09:54:49.277719  456136 cri.go:89] found id: "ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a"
	I1025 09:54:49.277724  456136 cri.go:89] found id: "32df4b3331837228098cd723b7cc594d68d17397153f50ef57dfee6afc0cfab0"
	I1025 09:54:49.277729  456136 cri.go:89] found id: ""
	I1025 09:54:49.277793  456136 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:54:49.291411  456136 out.go:203] 
	W1025 09:54:49.292515  456136 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:54:49.292539  456136 out.go:285] * 
	* 
	W1025 09:54:49.296486  456136 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:54:49.297763  456136 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-676314 --alsologtostderr -v=1 failed: exit status 80
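The root cause is visible in the stderr above: with the kubelet stopped and nothing running under runc's default state root, `runc list` fails with `open /run/runc: no such file or directory` instead of returning an empty list, and pause.go propagates that as GUEST_PAUSE. A tolerant wrapper would treat that one error as an empty result; the sketch below is a diagnostic illustration, not a proposed minikube patch:

    package pauseutil

    import (
    	"os/exec"
    	"strings"
    )

    // listRunc runs `runc list -f json` and treats a missing state directory
    // as an empty container list rather than a hard failure. Whether minikube
    // should do the same is exactly what this failure is about.
    func listRunc() (string, error) {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    	if err != nil && strings.Contains(string(out), "open /run/runc: no such file or directory") {
    		return "[]", nil // no runc state root => nothing is running
    	}
    	return string(out), err
    }
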
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-676314
helpers_test.go:243: (dbg) docker inspect old-k8s-version-676314:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7",
	        "Created": "2025-10-25T09:52:30.302289758Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 442093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:53:45.622107147Z",
	            "FinishedAt": "2025-10-25T09:53:43.51249455Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7/hosts",
	        "LogPath": "/var/lib/docker/containers/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7-json.log",
	        "Name": "/old-k8s-version-676314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-676314:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-676314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7",
	                "LowerDir": "/var/lib/docker/overlay2/ee55f66edc956ba04d8a48ac2f58334c6be8a80c382de1ca530ee94ac23a8ce7-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ee55f66edc956ba04d8a48ac2f58334c6be8a80c382de1ca530ee94ac23a8ce7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ee55f66edc956ba04d8a48ac2f58334c6be8a80c382de1ca530ee94ac23a8ce7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ee55f66edc956ba04d8a48ac2f58334c6be8a80c382de1ca530ee94ac23a8ce7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-676314",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-676314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-676314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-676314",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-676314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9dcc2b233f31b8e3ca5ec197e2e4c82058e4362ca9082e4e54f9bb21d047d45d",
	            "SandboxKey": "/var/run/docker/netns/9dcc2b233f31",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33240"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33241"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33244"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33242"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33243"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-676314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:57:0b:95:cd:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f66217c06b76e94123bb60007cf891525ec1407362c18c5530791b0803181dbc",
	                    "EndpointID": "2582cb626d770d64ca1bb88d8e662d60dd3abb5ffe12c0ebe4b8f8af33a141ed",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-676314",
	                        "05255cf7a9be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
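Most of the inspect payload above is irrelevant to triage; the fields that matter are `State.Status` and the published host ports (the SSH port under `22/tcp`). A small Go sketch that decodes just those fields from `docker inspect` output (struct shape mirrors the JSON above; the helper is illustrative, since minikube drives docker through its own cli_runner):

    package inspectutil

    import (
    	"encoding/json"
    	"errors"
    	"os/exec"
    )

    // containerInfo keeps only the docker-inspect fields the post-mortem
    // reads: run state and the host port bound for 22/tcp (SSH).
    type containerInfo struct {
    	State struct {
    		Status string `json:"Status"`
    	} `json:"State"`
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIP   string `json:"HostIp"`
    			HostPort string `json:"HostPort"`
    		} `json:"Ports"`
    	} `json:"NetworkSettings"`
    }

    // Inspect shells out to `docker inspect <name>` and returns the container
    // status plus its mapped SSH port.
    func Inspect(name string) (status, sshPort string, err error) {
    	out, err := exec.Command("docker", "inspect", name).Output()
    	if err != nil {
    		return "", "", err
    	}
    	var infos []containerInfo
    	if err := json.Unmarshal(out, &infos); err != nil {
    		return "", "", err
    	}
    	if len(infos) == 0 {
    		return "", "", errors.New("no such container: " + name)
    	}
    	if bindings := infos[0].NetworkSettings.Ports["22/tcp"]; len(bindings) > 0 {
    		sshPort = bindings[0].HostPort
    	}
    	return infos[0].State.Status, sshPort, nil
    }
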
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-676314 -n old-k8s-version-676314
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-676314 -n old-k8s-version-676314: exit status 2 (329.286476ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-676314 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-676314 logs -n 25: (1.179112852s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-676314 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ image   │ newest-cni-042675 image list --format=json                                                                                                                                                                                                    │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ pause   │ -p newest-cni-042675 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-656799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-001549                                                                                                                                                                                                               │ disable-driver-mounts-001549 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p no-preload-656799 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-676314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-656799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-880773 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-880773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-846915 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ stop    │ -p embed-certs-846915 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ image   │ no-preload-656799 image list --format=json                                                                                                                                                                                                    │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p no-preload-656799 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ delete  │ -p no-preload-656799                                                                                                                                                                                                                          │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ delete  │ -p no-preload-656799                                                                                                                                                                                                                          │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ old-k8s-version-676314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p old-k8s-version-676314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:54:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:54:19.275788  449952 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:54:19.275916  449952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:19.275925  449952 out.go:374] Setting ErrFile to fd 2...
	I1025 09:54:19.275930  449952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:19.276131  449952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:54:19.276587  449952 out.go:368] Setting JSON to false
	I1025 09:54:19.278081  449952 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5803,"bootTime":1761380256,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:54:19.278181  449952 start.go:141] virtualization: kvm guest
	I1025 09:54:19.280051  449952 out.go:179] * [default-k8s-diff-port-880773] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:54:19.281403  449952 notify.go:220] Checking for updates...
	I1025 09:54:19.281428  449952 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:54:19.282722  449952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:54:19.283928  449952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:19.285222  449952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:54:19.286379  449952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:54:19.287745  449952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:54:19.289294  449952 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:19.289852  449952 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:54:19.314779  449952 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:54:19.314881  449952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:19.376455  449952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:54:19.36493292 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:19.376554  449952 docker.go:318] overlay module found
	I1025 09:54:19.377788  449952 out.go:179] * Using the docker driver based on existing profile
	I1025 09:54:19.378682  449952 start.go:305] selected driver: docker
	I1025 09:54:19.378698  449952 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:19.378796  449952 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:54:19.379365  449952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:19.439139  449952 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-25 09:54:19.42844643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:19.439456  449952 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:19.439486  449952 cni.go:84] Creating CNI manager for ""
	I1025 09:54:19.439535  449952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:19.439596  449952 start.go:349] cluster config:
	{Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:19.441502  449952 out.go:179] * Starting "default-k8s-diff-port-880773" primary control-plane node in "default-k8s-diff-port-880773" cluster
	I1025 09:54:19.442631  449952 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:54:19.443961  449952 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:54:19.445195  449952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:19.445250  449952 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:54:19.445263  449952 cache.go:58] Caching tarball of preloaded images
	I1025 09:54:19.445295  449952 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:54:19.445383  449952 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:54:19.445399  449952 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:54:19.445551  449952 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/config.json ...
	I1025 09:54:19.469540  449952 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:54:19.469567  449952 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:54:19.469589  449952 cache.go:232] Successfully downloaded all kic artifacts
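
The cache verification above reduces to two cheap checks: the preload tarball already exists on disk, and the kicbase image is already loadable from the local docker daemon. A minimal sketch of both checks (the path and image ref in the log are the real ones; the helpers themselves are hypothetical):

    package cacheutil

    import (
    	"os"
    	"os/exec"
    )

    // havePreload reports whether the preloaded-images tarball is on disk,
    // the check behind the preload.go "Found local preload" line above.
    func havePreload(tarball string) bool {
    	_, err := os.Stat(tarball)
    	return err == nil
    }

    // haveKicBase reports whether the base image is present in the local
    // docker daemon, the check behind the image.go lines above.
    func haveKicBase(ref string) bool {
    	return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }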
	I1025 09:54:19.469624  449952 start.go:360] acquireMachinesLock for default-k8s-diff-port-880773: {Name:mk083ef9abd9d3dbc7e696ddb5b045b01f4c2bf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:54:19.469696  449952 start.go:364] duration metric: took 50.424µs to acquireMachinesLock for "default-k8s-diff-port-880773"
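
acquireMachinesLock serializes concurrent minikube invocations per machine name; the Spec printed above (Delay:500ms Timeout:10m0s) comes from the juju/mutex library minikube uses. The sketch below shows the same acquire-with-timeout shape using a plain flock, purely to illustrate the pattern, not minikube's implementation:

    package lockutil

    import (
    	"errors"
    	"time"

    	"golang.org/x/sys/unix"
    )

    // acquire takes an exclusive non-blocking flock on path, retrying every
    // delay until timeout elapses. The caller releases the lock with
    // unix.Flock(fd, unix.LOCK_UN) and unix.Close(fd).
    func acquire(path string, delay, timeout time.Duration) (int, error) {
    	fd, err := unix.Open(path, unix.O_CREAT|unix.O_RDWR, 0o600)
    	if err != nil {
    		return -1, err
    	}
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := unix.Flock(fd, unix.LOCK_EX|unix.LOCK_NB); err == nil {
    			return fd, nil
    		}
    		if time.Now().After(deadline) {
    			unix.Close(fd)
    			return -1, errors.New("timed out acquiring machines lock")
    		}
    		time.Sleep(delay)
    	}
    }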
	I1025 09:54:19.469720  449952 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:54:19.469728  449952 fix.go:54] fixHost starting: 
	I1025 09:54:19.470052  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:19.492315  449952 fix.go:112] recreateIfNeeded on default-k8s-diff-port-880773: state=Stopped err=<nil>
	W1025 09:54:19.492399  449952 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:54:15.475986  440020 node_ready.go:57] node "embed-certs-846915" has "Ready":"False" status (will retry)
	I1025 09:54:17.476904  440020 node_ready.go:49] node "embed-certs-846915" is "Ready"
	I1025 09:54:17.476939  440020 node_ready.go:38] duration metric: took 11.003723459s for node "embed-certs-846915" to be "Ready" ...
	I1025 09:54:17.476955  440020 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:54:17.477016  440020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:54:17.489612  440020 api_server.go:72] duration metric: took 11.446400559s to wait for apiserver process to appear ...
	I1025 09:54:17.489645  440020 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:54:17.489664  440020 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:54:17.495599  440020 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:54:17.496792  440020 api_server.go:141] control plane version: v1.34.1
	I1025 09:54:17.496826  440020 api_server.go:131] duration metric: took 7.172976ms to wait for apiserver health ...
	I1025 09:54:17.496835  440020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:54:17.500516  440020 system_pods.go:59] 8 kube-system pods found
	I1025 09:54:17.500592  440020 system_pods.go:61] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:17.500600  440020 system_pods.go:61] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.500610  440020 system_pods.go:61] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.500613  440020 system_pods.go:61] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.500617  440020 system_pods.go:61] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.500620  440020 system_pods.go:61] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.500623  440020 system_pods.go:61] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.500627  440020 system_pods.go:61] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:17.500643  440020 system_pods.go:74] duration metric: took 3.795746ms to wait for pod list to return data ...
	I1025 09:54:17.500654  440020 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:54:17.503287  440020 default_sa.go:45] found service account: "default"
	I1025 09:54:17.503309  440020 default_sa.go:55] duration metric: took 2.649102ms for default service account to be created ...
	I1025 09:54:17.503319  440020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:54:17.506326  440020 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:17.506368  440020 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:17.506374  440020 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.506380  440020 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.506390  440020 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.506397  440020 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.506400  440020 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.506405  440020 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.506410  440020 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:17.506433  440020 retry.go:31] will retry after 188.876759ms: missing components: kube-dns
	I1025 09:54:17.700456  440020 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:17.700546  440020 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:17.700558  440020 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.700568  440020 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.700582  440020 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.700588  440020 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.700593  440020 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.700599  440020 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.700612  440020 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:17.700632  440020 retry.go:31] will retry after 250.335068ms: missing components: kube-dns
	I1025 09:54:17.955256  440020 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:17.955289  440020 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Running
	I1025 09:54:17.955295  440020 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running
	I1025 09:54:17.955298  440020 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:54:17.955302  440020 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running
	I1025 09:54:17.955307  440020 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running
	I1025 09:54:17.955311  440020 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:54:17.955314  440020 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running
	I1025 09:54:17.955317  440020 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Running
	I1025 09:54:17.955324  440020 system_pods.go:126] duration metric: took 451.999845ms to wait for k8s-apps to be running ...
	I1025 09:54:17.955332  440020 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:54:17.955420  440020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:17.970053  440020 system_svc.go:56] duration metric: took 14.706919ms WaitForService to wait for kubelet
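
The kubelet wait above polls the exact command the log shows ssh_runner executing until it exits zero. A sketch of that poll loop (minikube runs the command over SSH inside the node; the local variant below is an assumption made for brevity):

    package svcutil

    import (
    	"os/exec"
    	"time"
    )

    // waitActive polls `sudo systemctl is-active --quiet service <unit>`,
    // arguments copied verbatim from the logged command, until it succeeds
    // or the timeout elapses.
    func waitActive(unit string, timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", unit).Run() == nil {
    			return true
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return false
    }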
	I1025 09:54:17.970086  440020 kubeadm.go:586] duration metric: took 11.926881356s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:17.970111  440020 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:54:17.973494  440020 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:54:17.973526  440020 node_conditions.go:123] node cpu capacity is 8
	I1025 09:54:17.973543  440020 node_conditions.go:105] duration metric: took 3.426431ms to run NodePressure ...
	I1025 09:54:17.973558  440020 start.go:241] waiting for startup goroutines ...
	I1025 09:54:17.973567  440020 start.go:246] waiting for cluster config update ...
	I1025 09:54:17.973582  440020 start.go:255] writing updated cluster config ...
	I1025 09:54:17.973852  440020 ssh_runner.go:195] Run: rm -f paused
	I1025 09:54:17.978265  440020 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:17.982758  440020 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.987122  440020 pod_ready.go:94] pod "coredns-66bc5c9577-4w68k" is "Ready"
	I1025 09:54:17.987148  440020 pod_ready.go:86] duration metric: took 4.365303ms for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.989310  440020 pod_ready.go:83] waiting for pod "etcd-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.993594  440020 pod_ready.go:94] pod "etcd-embed-certs-846915" is "Ready"
	I1025 09:54:17.993619  440020 pod_ready.go:86] duration metric: took 4.284136ms for pod "etcd-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.995810  440020 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:17.999546  440020 pod_ready.go:94] pod "kube-apiserver-embed-certs-846915" is "Ready"
	I1025 09:54:17.999606  440020 pod_ready.go:86] duration metric: took 3.774304ms for pod "kube-apiserver-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.001621  440020 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.384665  440020 pod_ready.go:94] pod "kube-controller-manager-embed-certs-846915" is "Ready"
	I1025 09:54:18.384701  440020 pod_ready.go:86] duration metric: took 383.060784ms for pod "kube-controller-manager-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.583914  440020 pod_ready.go:83] waiting for pod "kube-proxy-kfqqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.982945  440020 pod_ready.go:94] pod "kube-proxy-kfqqh" is "Ready"
	I1025 09:54:18.982973  440020 pod_ready.go:86] duration metric: took 399.034255ms for pod "kube-proxy-kfqqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:19.184109  440020 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:19.584000  440020 pod_ready.go:94] pod "kube-scheduler-embed-certs-846915" is "Ready"
	I1025 09:54:19.584035  440020 pod_ready.go:86] duration metric: took 399.892029ms for pod "kube-scheduler-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:19.584051  440020 pod_ready.go:40] duration metric: took 1.605758265s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:19.650747  440020 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:54:19.652803  440020 out.go:179] * Done! kubectl is now configured to use "embed-certs-846915" cluster and "default" namespace by default
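
The pod_ready.go lines above implement the post-start gate: every kube-system pod carrying one of the listed labels must report the PodReady condition before the profile is declared done. A client-go sketch of that check (the selector strings come from the log; the function itself is an illustration, not minikube's code):

    package readiness

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // allReady reports whether every kube-system pod matching selector
    // (e.g. "k8s-app=kube-dns" or "component=etcd") has PodReady=True.
    func allReady(ctx context.Context, c kubernetes.Interface, selector string) (bool, error) {
    	pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, cond := range p.Status.Conditions {
    			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return false, nil
    		}
    	}
    	return true, nil
    }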
	W1025 09:54:16.068318  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:18.567974  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:18.301621  445741 pod_ready.go:94] pod "coredns-66bc5c9577-sw9hv" is "Ready"
	I1025 09:54:18.301648  445741 pod_ready.go:86] duration metric: took 9.506322482s for pod "coredns-66bc5c9577-sw9hv" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:18.304547  445741 pod_ready.go:83] waiting for pod "etcd-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:54:20.312171  445741 pod_ready.go:104] pod "etcd-no-preload-656799" is not "Ready", error: <nil>
	I1025 09:54:21.809723  445741 pod_ready.go:94] pod "etcd-no-preload-656799" is "Ready"
	I1025 09:54:21.809749  445741 pod_ready.go:86] duration metric: took 3.505178884s for pod "etcd-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.812231  445741 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.816695  445741 pod_ready.go:94] pod "kube-apiserver-no-preload-656799" is "Ready"
	I1025 09:54:21.816722  445741 pod_ready.go:86] duration metric: took 4.466508ms for pod "kube-apiserver-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.819011  445741 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.823589  445741 pod_ready.go:94] pod "kube-controller-manager-no-preload-656799" is "Ready"
	I1025 09:54:21.823628  445741 pod_ready.go:86] duration metric: took 4.593239ms for pod "kube-controller-manager-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:21.825939  445741 pod_ready.go:83] waiting for pod "kube-proxy-gfph2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.010836  445741 pod_ready.go:94] pod "kube-proxy-gfph2" is "Ready"
	I1025 09:54:22.010862  445741 pod_ready.go:86] duration metric: took 184.902324ms for pod "kube-proxy-gfph2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.210739  445741 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.608665  445741 pod_ready.go:94] pod "kube-scheduler-no-preload-656799" is "Ready"
	I1025 09:54:22.608695  445741 pod_ready.go:86] duration metric: took 397.92747ms for pod "kube-scheduler-no-preload-656799" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:22.608710  445741 pod_ready.go:40] duration metric: took 13.818887723s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:22.670288  445741 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:54:22.672465  445741 out.go:179] * Done! kubectl is now configured to use "no-preload-656799" cluster and "default" namespace by default
	I1025 09:54:19.494507  449952 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-880773" ...
	I1025 09:54:19.494587  449952 cli_runner.go:164] Run: docker start default-k8s-diff-port-880773
	I1025 09:54:19.824726  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:19.851116  449952 kic.go:430] container "default-k8s-diff-port-880773" state is running.
	I1025 09:54:19.851830  449952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:54:19.874663  449952 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/config.json ...
	I1025 09:54:19.874958  449952 machine.go:93] provisionDockerMachine start ...
	I1025 09:54:19.875036  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:19.900142  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:19.900490  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:19.900509  449952 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:54:19.901160  449952 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54890->127.0.0.1:33250: read: connection reset by peer
	I1025 09:54:23.064068  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-880773
	
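The first dial at 09:54:19 fails with a connection reset because sshd inside the freshly restarted container is not listening yet; libmachine keeps retrying until the hostname command succeeds at 09:54:23. A rough equivalent with golang.org/x/crypto/ssh, using the port, user, and key path from the log (host-key checking is disabled purely for this local-VM illustration):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // runWithRetry keeps dialing until sshd answers, then runs one command.
    func runWithRetry(addr string, cfg *ssh.ClientConfig, cmd string) (string, error) {
        var client *ssh.Client
        var err error
        for i := 0; i < 30; i++ {
            if client, err = ssh.Dial("tcp", addr, cfg); err == nil {
                break
            }
            time.Sleep(time.Second) // container still booting; try again
        }
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test VM
            Timeout:         5 * time.Second,
        }
        out, err := runWithRetry("127.0.0.1:33250", cfg, "hostname")
        fmt.Println(out, err)
    }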
	I1025 09:54:23.064110  449952 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-880773"
	I1025 09:54:23.064192  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.086772  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:23.087065  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:23.087087  449952 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-880773 && echo "default-k8s-diff-port-880773" | sudo tee /etc/hostname
	I1025 09:54:23.252426  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-880773
	
	I1025 09:54:23.252521  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.273044  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:23.273316  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:23.273335  449952 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-880773' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-880773/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-880773' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:54:23.424572  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:54:23.424603  449952 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:54:23.424629  449952 ubuntu.go:190] setting up certificates
	I1025 09:54:23.424642  449952 provision.go:84] configureAuth start
	I1025 09:54:23.424716  449952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:54:23.447850  449952 provision.go:143] copyHostCerts
	I1025 09:54:23.447922  449952 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:54:23.447939  449952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:54:23.448010  449952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:54:23.448121  449952 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:54:23.448133  449952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:54:23.448172  449952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:54:23.448307  449952 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:54:23.448322  449952 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:54:23.448386  449952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:54:23.448466  449952 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-880773 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-880773 localhost minikube]
	I1025 09:54:23.670392  449952 provision.go:177] copyRemoteCerts
	I1025 09:54:23.670473  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:54:23.670534  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.695861  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:23.810003  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:54:23.831919  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 09:54:23.855020  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:54:23.876651  449952 provision.go:87] duration metric: took 451.986685ms to configureAuth
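configureAuth regenerates the machine's server certificate, signed by the minikube CA, with exactly the SANs listed in the provision.go line above (127.0.0.1, 192.168.94.2, the machine name, localhost, minikube). Below is a condensed crypto/x509 sketch of issuing such a SAN-bearing server cert; minting a throwaway CA in-process and the one-year validity are assumptions made to keep the example self-contained.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // In minikube the CA is loaded from ca.pem/ca-key.pem; here we mint one ad hoc.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(1, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-880773"}},
            // SANs exactly as in the provision.go line above:
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
            DNSNames:    []string{"default-k8s-diff-port-880773", "localhost", "minikube"},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().AddDate(1, 0, 0),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }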
	I1025 09:54:23.876682  449952 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:54:23.876901  449952 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:23.877015  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:23.898381  449952 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:23.898653  449952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33250 <nil> <nil>}
	I1025 09:54:23.898684  449952 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1025 09:54:20.568510  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:22.569444  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:25.068911  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:24.748214  449952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:54:24.748254  449952 machine.go:96] duration metric: took 4.873275374s to provisionDockerMachine
	I1025 09:54:24.748278  449952 start.go:293] postStartSetup for "default-k8s-diff-port-880773" (driver="docker")
	I1025 09:54:24.748293  449952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:54:24.748387  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:54:24.748520  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:24.768940  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:24.873795  449952 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:54:24.877543  449952 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:54:24.877575  449952 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:54:24.877589  449952 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:54:24.877661  449952 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:54:24.877782  449952 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:54:24.877958  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:54:24.887735  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:24.906567  449952 start.go:296] duration metric: took 158.269737ms for postStartSetup
	I1025 09:54:24.906638  449952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:54:24.906671  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:24.925060  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:25.024684  449952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:54:25.029312  449952 fix.go:56] duration metric: took 5.559580439s for fixHost
	I1025 09:54:25.029335  449952 start.go:83] releasing machines lock for "default-k8s-diff-port-880773", held for 5.559626356s
	I1025 09:54:25.029412  449952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-880773
	I1025 09:54:25.053651  449952 ssh_runner.go:195] Run: cat /version.json
	I1025 09:54:25.053671  449952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:54:25.053710  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:25.053740  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:25.076792  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:25.077574  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:25.177839  449952 ssh_runner.go:195] Run: systemctl --version
	I1025 09:54:25.232420  449952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:54:25.269857  449952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:54:25.274931  449952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:54:25.275022  449952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:54:25.283809  449952 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:54:25.283844  449952 start.go:495] detecting cgroup driver to use...
	I1025 09:54:25.283873  449952 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:54:25.283907  449952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:54:25.298715  449952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:54:25.311114  449952 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:54:25.311179  449952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:54:25.326245  449952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:54:25.338983  449952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:54:25.421886  449952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:54:25.507785  449952 docker.go:234] disabling docker service ...
	I1025 09:54:25.507851  449952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:54:25.522758  449952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:54:25.535545  449952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:54:25.624987  449952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:54:25.708591  449952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:54:25.721462  449952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:54:25.736203  449952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:54:25.736286  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.745513  449952 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:54:25.745572  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.754426  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.763537  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.772424  449952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:54:25.780767  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.789663  449952 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:25.798468  449952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
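Taken together, the sed edits above leave the relevant keys of /etc/crio/crio.conf.d/02-crio.conf looking roughly like the fragment below. This is reconstructed from the commands, not captured from the node, and the surrounding TOML table headers are omitted:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

Setting net.ipv4.ip_unprivileged_port_start to 0 lets containers bind ports below 1024 without extra capabilities, which ingress-style addons rely on.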
	I1025 09:54:25.807406  449952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:54:25.815004  449952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:54:25.822998  449952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:25.903676  449952 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:54:26.020906  449952 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:54:26.020973  449952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:54:26.025150  449952 start.go:563] Will wait 60s for crictl version
	I1025 09:54:26.025208  449952 ssh_runner.go:195] Run: which crictl
	I1025 09:54:26.029013  449952 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:54:26.057753  449952 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:54:26.057819  449952 ssh_runner.go:195] Run: crio --version
	I1025 09:54:26.086687  449952 ssh_runner.go:195] Run: crio --version
	I1025 09:54:26.116337  449952 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:54:26.117443  449952 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-880773 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:54:26.135714  449952 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1025 09:54:26.140427  449952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:54:26.154403  449952 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:54:26.154570  449952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:26.154635  449952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:26.192928  449952 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:26.192961  449952 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:54:26.193024  449952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:26.221578  449952 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:26.221602  449952 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:54:26.221611  449952 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1025 09:54:26.221708  449952 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-880773 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:54:26.221767  449952 ssh_runner.go:195] Run: crio config
	I1025 09:54:26.266519  449952 cni.go:84] Creating CNI manager for ""
	I1025 09:54:26.266551  449952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:26.266577  449952 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:54:26.266705  449952 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-880773 NodeName:default-k8s-diff-port-880773 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:54:26.266942  449952 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-880773"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:54:26.267030  449952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:54:26.276099  449952 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:54:26.276158  449952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:54:26.283856  449952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 09:54:26.296736  449952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:54:26.309600  449952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1025 09:54:26.322267  449952 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:54:26.325950  449952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:54:26.336085  449952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:26.418603  449952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:26.445329  449952 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773 for IP: 192.168.94.2
	I1025 09:54:26.445370  449952 certs.go:195] generating shared ca certs ...
	I1025 09:54:26.445391  449952 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:26.445589  449952 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:54:26.445651  449952 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:54:26.445663  449952 certs.go:257] generating profile certs ...
	I1025 09:54:26.445763  449952 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/client.key
	I1025 09:54:26.445836  449952 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key.bf049977
	I1025 09:54:26.445889  449952 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.key
	I1025 09:54:26.446021  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:54:26.446059  449952 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:54:26.446071  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:54:26.446100  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:54:26.446130  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:54:26.446159  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:54:26.446208  449952 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:26.447082  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:54:26.467801  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:54:26.487512  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:54:26.507419  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:54:26.531864  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 09:54:26.550342  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:54:26.569273  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:54:26.587593  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/default-k8s-diff-port-880773/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:54:26.605286  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:54:26.623801  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:54:26.642803  449952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:54:26.660752  449952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:54:26.674006  449952 ssh_runner.go:195] Run: openssl version
	I1025 09:54:26.680368  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:54:26.689226  449952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:26.693134  449952 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:26.693180  449952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:26.728010  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:54:26.736810  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:54:26.746043  449952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:54:26.749893  449952 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:54:26.749943  449952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:54:26.785153  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:54:26.794063  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:54:26.802929  449952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:54:26.807038  449952 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:54:26.807101  449952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:54:26.844046  449952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:54:26.852738  449952 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:54:26.856516  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:54:26.892058  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:54:26.928987  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:54:26.978149  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:54:27.021912  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:54:27.075255  449952 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
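The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate remains valid for at least the next 24 hours (86400 seconds); the hash-named symlinks created just before them (b5213941.0, 51391683.0, 3ec20f2e.0) are what lets OpenSSL-linked tools discover those CAs under /etc/ssl/certs. A Go equivalent of the expiry check, with the path taken from one of the probes above:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires in
    // less than d, mirroring `openssl x509 -checkend` with d in seconds.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }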
	I1025 09:54:27.132302  449952 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-880773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-880773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:27.132461  449952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:54:27.132541  449952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:54:27.166099  449952 cri.go:89] found id: "8a40c304121945c99334f375a4fc8f1073390b82cca6a44c6e2b224a5804ed43"
	I1025 09:54:27.166122  449952 cri.go:89] found id: "1099e940dc59e4a7fc6edf4f82c427fc4633cbc73d1759f0ef430fccd002219f"
	I1025 09:54:27.166131  449952 cri.go:89] found id: "b7360eb6624b8284557553c607130a8087e3690512dcc9caea4351f9f876fd02"
	I1025 09:54:27.166136  449952 cri.go:89] found id: "9a7e2aef555d4452a0b73ff6d39e556aaf40affe43c7adcaf8fc119b3910c298"
	I1025 09:54:27.166141  449952 cri.go:89] found id: ""
	I1025 09:54:27.166194  449952 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:54:27.179061  449952 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:27Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:27.179160  449952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:54:27.188157  449952 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:54:27.188180  449952 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:54:27.188228  449952 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:54:27.196153  449952 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:54:27.197499  449952 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-880773" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:27.198480  449952 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-130604/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-880773" cluster setting kubeconfig missing "default-k8s-diff-port-880773" context setting]
	I1025 09:54:27.199935  449952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:27.202256  449952 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:54:27.210782  449952 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1025 09:54:27.210819  449952 kubeadm.go:601] duration metric: took 22.632727ms to restartPrimaryControlPlane
	I1025 09:54:27.210865  449952 kubeadm.go:402] duration metric: took 78.655845ms to StartCluster
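The kubeconfig repair at 09:54:27 boils down to loading the shared kubeconfig and checking that the profile appears in both the clusters and contexts maps; when either entry is missing, minikube rewrites the file under the WriteFile lock shown above. A sketch of that check using client-go's clientcmd, with the path and profile name from the log and error handling abbreviated:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        const path = "/home/jenkins/minikube-integration/21794-130604/kubeconfig"
        const name = "default-k8s-diff-port-880773"

        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        _, hasCluster := cfg.Clusters[name]
        _, hasContext := cfg.Contexts[name]
        if !hasCluster || !hasContext {
            // This is the "needs updating (will repair)" branch from the log:
            // the cluster/context entries get re-added and the file written back.
            fmt.Printf("kubeconfig missing %q (cluster=%v, context=%v); repairing\n",
                name, hasCluster, hasContext)
        }
    }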
	I1025 09:54:27.210883  449952 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:27.210942  449952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:27.213436  449952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:27.213678  449952 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:54:27.213737  449952 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:54:27.213844  449952 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-880773"
	I1025 09:54:27.213859  449952 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-880773"
	I1025 09:54:27.213875  449952 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-880773"
	I1025 09:54:27.213886  449952 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-880773"
	I1025 09:54:27.213891  449952 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-880773"
	W1025 09:54:27.213898  449952 addons.go:247] addon dashboard should already be in state true
	I1025 09:54:27.213936  449952 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:54:27.213939  449952 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:27.213866  449952 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-880773"
	W1025 09:54:27.214066  449952 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:54:27.214095  449952 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:54:27.214261  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.214456  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.214610  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.216018  449952 out.go:179] * Verifying Kubernetes components...
	I1025 09:54:27.217234  449952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:27.239708  449952 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-880773"
	W1025 09:54:27.239738  449952 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:54:27.239770  449952 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:54:27.240253  449952 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:54:27.242481  449952 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:54:27.242489  449952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:54:27.243627  449952 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:27.243645  449952 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:54:27.243651  449952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:54:27.243712  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:27.247468  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:54:27.247486  449952 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:54:27.247539  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:27.267591  449952 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:27.267622  449952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:54:27.267686  449952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:54:27.276575  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:27.285081  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:27.298498  449952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:54:27.368890  449952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:27.383755  449952 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-880773" to be "Ready" ...
	I1025 09:54:27.395977  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:54:27.396003  449952 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:54:27.406130  449952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:27.411552  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:54:27.411662  449952 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:54:27.419928  449952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:27.427159  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:54:27.427182  449952 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:54:27.446072  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:54:27.446100  449952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:54:27.471003  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:54:27.471033  449952 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:54:27.488999  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:54:27.489025  449952 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:54:27.503088  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:54:27.503113  449952 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:54:27.517184  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:54:27.517212  449952 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:54:27.530517  449952 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:27.530540  449952 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:54:27.545962  449952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:29.018628  449952 node_ready.go:49] node "default-k8s-diff-port-880773" is "Ready"
	I1025 09:54:29.018668  449952 node_ready.go:38] duration metric: took 1.634880084s for node "default-k8s-diff-port-880773" to be "Ready" ...
	I1025 09:54:29.018686  449952 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:54:29.018740  449952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:54:29.506034  449952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.099869063s)
	I1025 09:54:29.506102  449952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.086134972s)
	I1025 09:54:29.506180  449952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.960181276s)
	I1025 09:54:29.506238  449952 api_server.go:72] duration metric: took 2.292529535s to wait for apiserver process to appear ...
	I1025 09:54:29.506289  449952 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:54:29.506306  449952 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1025 09:54:29.507716  449952 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-880773 addons enable metrics-server
	
	I1025 09:54:29.513028  449952 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:54:29.513055  449952 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:54:29.514792  449952 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1025 09:54:27.071249  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	W1025 09:54:29.568141  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:29.515891  449952 addons.go:514] duration metric: took 2.302163358s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 09:54:30.007035  449952 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1025 09:54:30.013495  449952 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:54:30.013618  449952 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:54:30.507293  449952 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1025 09:54:30.511406  449952 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1025 09:54:30.512375  449952 api_server.go:141] control plane version: v1.34.1
	I1025 09:54:30.512397  449952 api_server.go:131] duration metric: took 1.006101961s to wait for apiserver health ...
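
The api_server.go trace above polls the apiserver's /healthz endpoint roughly twice a second, treating the 500 responses emitted while the rbac/bootstrap-roles post-start hook is still pending as retryable, and stops as soon as the endpoint returns 200. A minimal Go sketch of that polling pattern follows (standard library only; the URL is the one from the log, and InsecureSkipVerify stands in for the cluster-CA trust the real check uses — this is a sketch, not minikube's implementation):

	// waitForHealthz sketches the wait loop logged above: GET
	// <apiserver>/healthz every 500ms until it returns HTTP 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The real check trusts the cluster CA; skipping verification
			// only keeps this sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "healthz returned 200: ok"
				}
				// e.g. 500 while [-]poststarthook/rbac/bootstrap-roles is pending
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.94.2:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
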
	I1025 09:54:30.512405  449952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:54:30.515834  449952 system_pods.go:59] 8 kube-system pods found
	I1025 09:54:30.515887  449952 system_pods.go:61] "coredns-66bc5c9577-29ltg" [5d5247ec-619e-4bcb-82c5-1d5c0b42b685] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:30.515906  449952 system_pods.go:61] "etcd-default-k8s-diff-port-880773" [abe5a2b4-061a-47af-9c04-41b3261607b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:54:30.515928  449952 system_pods.go:61] "kindnet-cnqn8" [c804731f-754b-4ce1-9609-1a6fc8cf317c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 09:54:30.515939  449952 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-880773" [e8188321-7de4-49f4-97f9-e7aeca6d00db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:54:30.515950  449952 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-880773" [29ba481f-eea8-41cb-bbde-2551ae253f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:54:30.515961  449952 system_pods.go:61] "kube-proxy-bg94v" [4b7ad6fe-03c3-41dd-9633-6ed6a648201f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:54:30.515973  449952 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-880773" [952c634f-45b2-401d-9a90-6d2123e839ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:54:30.515981  449952 system_pods.go:61] "storage-provisioner" [469fcc4c-281e-4595-aa3b-4ea853afb153] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:30.515998  449952 system_pods.go:74] duration metric: took 3.581249ms to wait for pod list to return data ...
	I1025 09:54:30.516008  449952 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:54:30.518119  449952 default_sa.go:45] found service account: "default"
	I1025 09:54:30.518138  449952 default_sa.go:55] duration metric: took 2.123947ms for default service account to be created ...
	I1025 09:54:30.518148  449952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:54:30.520334  449952 system_pods.go:86] 8 kube-system pods found
	I1025 09:54:30.520372  449952 system_pods.go:89] "coredns-66bc5c9577-29ltg" [5d5247ec-619e-4bcb-82c5-1d5c0b42b685] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:54:30.520410  449952 system_pods.go:89] "etcd-default-k8s-diff-port-880773" [abe5a2b4-061a-47af-9c04-41b3261607b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:54:30.520421  449952 system_pods.go:89] "kindnet-cnqn8" [c804731f-754b-4ce1-9609-1a6fc8cf317c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 09:54:30.520430  449952 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-880773" [e8188321-7de4-49f4-97f9-e7aeca6d00db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:54:30.520439  449952 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-880773" [29ba481f-eea8-41cb-bbde-2551ae253f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:54:30.520446  449952 system_pods.go:89] "kube-proxy-bg94v" [4b7ad6fe-03c3-41dd-9633-6ed6a648201f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:54:30.520452  449952 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-880773" [952c634f-45b2-401d-9a90-6d2123e839ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:54:30.520458  449952 system_pods.go:89] "storage-provisioner" [469fcc4c-281e-4595-aa3b-4ea853afb153] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:54:30.520464  449952 system_pods.go:126] duration metric: took 2.311292ms to wait for k8s-apps to be running ...
	I1025 09:54:30.520472  449952 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:54:30.520522  449952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:54:30.533459  449952 system_svc.go:56] duration metric: took 12.977874ms WaitForService to wait for kubelet
	I1025 09:54:30.533492  449952 kubeadm.go:586] duration metric: took 3.319782027s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:30.533514  449952 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:54:30.536489  449952 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:54:30.536518  449952 node_conditions.go:123] node cpu capacity is 8
	I1025 09:54:30.536536  449952 node_conditions.go:105] duration metric: took 3.015821ms to run NodePressure ...
	I1025 09:54:30.536552  449952 start.go:241] waiting for startup goroutines ...
	I1025 09:54:30.536565  449952 start.go:246] waiting for cluster config update ...
	I1025 09:54:30.536584  449952 start.go:255] writing updated cluster config ...
	I1025 09:54:30.536891  449952 ssh_runner.go:195] Run: rm -f paused
	I1025 09:54:30.540962  449952 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:30.544202  449952 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-29ltg" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:54:32.550284  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:32.069496  441651 pod_ready.go:104] pod "coredns-5dd5756b68-qffxt" is not "Ready", error: <nil>
	I1025 09:54:34.069854  441651 pod_ready.go:94] pod "coredns-5dd5756b68-qffxt" is "Ready"
	I1025 09:54:34.069885  441651 pod_ready.go:86] duration metric: took 37.507966247s for pod "coredns-5dd5756b68-qffxt" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.074076  441651 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.080201  441651 pod_ready.go:94] pod "etcd-old-k8s-version-676314" is "Ready"
	I1025 09:54:34.080244  441651 pod_ready.go:86] duration metric: took 6.136939ms for pod "etcd-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.084014  441651 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.089480  441651 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-676314" is "Ready"
	I1025 09:54:34.089513  441651 pod_ready.go:86] duration metric: took 5.467331ms for pod "kube-apiserver-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.092917  441651 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.266390  441651 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-676314" is "Ready"
	I1025 09:54:34.266419  441651 pod_ready.go:86] duration metric: took 173.473814ms for pod "kube-controller-manager-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.468281  441651 pod_ready.go:83] waiting for pod "kube-proxy-bsxx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:34.866765  441651 pod_ready.go:94] pod "kube-proxy-bsxx6" is "Ready"
	I1025 09:54:34.866794  441651 pod_ready.go:86] duration metric: took 398.483847ms for pod "kube-proxy-bsxx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:35.067296  441651 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:35.466580  441651 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-676314" is "Ready"
	I1025 09:54:35.466609  441651 pod_ready.go:86] duration metric: took 399.280578ms for pod "kube-scheduler-old-k8s-version-676314" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:54:35.466637  441651 pod_ready.go:40] duration metric: took 38.910774112s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:54:35.520724  441651 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1025 09:54:35.522478  441651 out.go:203] 
	W1025 09:54:35.525673  441651 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 09:54:35.527157  441651 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 09:54:35.528391  441651 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-676314" cluster and "default" namespace by default
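
The pod_ready.go lines on either side of this point poll individual kube-system pods and report each one "Ready" once its PodReady condition turns True. A hedged client-go sketch of that check follows; it is not minikube's own implementation, and the kubeconfig path and pod name are assumptions taken from client-go defaults and from the log above:

	// isPodReady sketches the "Ready" check logged by pod_ready.go:
	// a pod counts as Ready when its PodReady condition is True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumes the default kubeconfig location; minikube writes there too.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Pod name taken from the log above; any kube-system pod works.
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-qffxt", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %q ready: %v\n", pod.Name, isPodReady(pod))
	}
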
	W1025 09:54:35.050266  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:37.050630  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:39.550397  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:41.550439  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:44.050143  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:46.550193  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:48.550524  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 25 09:54:13 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:13.61112898Z" level=info msg="Created container 32df4b3331837228098cd723b7cc594d68d17397153f50ef57dfee6afc0cfab0: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-q7z2s/kubernetes-dashboard" id=b657cbcb-dc88-4516-82f9-a3e5034f8c2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:13 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:13.6117702Z" level=info msg="Starting container: 32df4b3331837228098cd723b7cc594d68d17397153f50ef57dfee6afc0cfab0" id=a54e9b9d-1cbc-4a4a-9ec4-c2126cb27cd1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:13 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:13.613457993Z" level=info msg="Started container" PID=1730 containerID=32df4b3331837228098cd723b7cc594d68d17397153f50ef57dfee6afc0cfab0 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-q7z2s/kubernetes-dashboard id=a54e9b9d-1cbc-4a4a-9ec4-c2126cb27cd1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb54e83ad2874e7750eb8f07df38a8ae8c27732ad75b9d2d2104d20cf4e8e4cf
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.143071375Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=039a1649-67a7-46a9-acb1-25bc0d85a58f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.144087774Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=30a79f9f-9fd0-4755-9d0e-3a40413f043a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.145094958Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6a0b97c4-ef7c-4ad5-81e2-63cdebfe9c92 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.145226211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.150607536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.150812559Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/77c9c00d8445b5fa091e6c5df35c02c0c88caf68fc3b946e9e17aa71ac50b588/merged/etc/passwd: no such file or directory"
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.150846138Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/77c9c00d8445b5fa091e6c5df35c02c0c88caf68fc3b946e9e17aa71ac50b588/merged/etc/group: no such file or directory"
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.151113043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.175520388Z" level=info msg="Created container 13d9708c5d841b464c18aa3085829c1ab76dddd4a9cf55a722726920eebfa86f: kube-system/storage-provisioner/storage-provisioner" id=6a0b97c4-ef7c-4ad5-81e2-63cdebfe9c92 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.176149465Z" level=info msg="Starting container: 13d9708c5d841b464c18aa3085829c1ab76dddd4a9cf55a722726920eebfa86f" id=27e66b2f-b20a-4cff-ad49-7feee1437b1a name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.178433859Z" level=info msg="Started container" PID=1754 containerID=13d9708c5d841b464c18aa3085829c1ab76dddd4a9cf55a722726920eebfa86f description=kube-system/storage-provisioner/storage-provisioner id=27e66b2f-b20a-4cff-ad49-7feee1437b1a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f08e6704603faeee60ac35ecfde218631d6f2a6cdfe7025ef302baa3b96a6549
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.027938818Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7f243b7b-ef0a-4b4f-b41f-66ccc7c0d2e1 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.028879049Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3d6ae517-a31c-4020-8c95-6c9640a9cf41 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.029955612Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q/dashboard-metrics-scraper" id=498cce77-c4f7-4e83-8163-a894a0cae621 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.030140146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.036816419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.037453033Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.080338658Z" level=info msg="Created container ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q/dashboard-metrics-scraper" id=498cce77-c4f7-4e83-8163-a894a0cae621 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.08097355Z" level=info msg="Starting container: ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a" id=fd45845e-b408-46f1-98e8-641d2ac5ae37 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.082885211Z" level=info msg="Started container" PID=1770 containerID=ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q/dashboard-metrics-scraper id=fd45845e-b408-46f1-98e8-641d2ac5ae37 name=/runtime.v1.RuntimeService/StartContainer sandboxID=18f5ffd9a1d747f9af6b6eb16c606057fe36884904934d9d73210302d5071330
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.153074934Z" level=info msg="Removing container: af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98" id=17485b4d-f56d-43f2-b605-16a8e859ea71 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.163971835Z" level=info msg="Removed container af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q/dashboard-metrics-scraper" id=17485b4d-f56d-43f2-b605-16a8e859ea71 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	ce0180eba192d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   18f5ffd9a1d74       dashboard-metrics-scraper-5f989dc9cf-gx27q       kubernetes-dashboard
	13d9708c5d841       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   f08e6704603fa       storage-provisioner                              kube-system
	32df4b3331837       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   bb54e83ad2874       kubernetes-dashboard-8694d4445c-q7z2s            kubernetes-dashboard
	44e78a06fe8d3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           54 seconds ago      Running             coredns                     0                   9ab60ce544b7c       coredns-5dd5756b68-qffxt                         kube-system
	9ef737de195e6       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   9238f643c231d       busybox                                          default
	54bf43bd1d263       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   1f9a5977383c1       kindnet-5hnxc                                    kube-system
	672fe80d5a9e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   f08e6704603fa       storage-provisioner                              kube-system
	7e9c9db60e85d       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           54 seconds ago      Running             kube-proxy                  0                   3a472f87f5070       kube-proxy-bsxx6                                 kube-system
	1bdedceab1946       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   9b882f0cfea3c       kube-apiserver-old-k8s-version-676314            kube-system
	208f766f9a226       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   2dfb5118163a1       etcd-old-k8s-version-676314                      kube-system
	e2fb3d4360165       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   c99eda1634540       kube-scheduler-old-k8s-version-676314            kube-system
	6d19999376dc5       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   64c8b58390c3d       kube-controller-manager-old-k8s-version-676314   kube-system
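
The table above can be reproduced on the node with crictl ps -a pointed at the CRI-O socket (unix:///var/run/crio/crio.sock, matching the cri-socket annotation in the node description below), which is often the quickest way to cross-check an Exited container such as dashboard-metrics-scraper against the kubelet back-off messages further down.
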
	
	
	==> coredns [44e78a06fe8d3364412a49fe97c33eb05da0e0b00edd440ec10e521482e09243] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47099 - 32185 "HINFO IN 1945696538429773668.5261800709378654143. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025319975s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-676314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-676314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=old-k8s-version-676314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_52_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-676314
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:54:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:54:25 +0000   Sat, 25 Oct 2025 09:52:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:54:25 +0000   Sat, 25 Oct 2025 09:52:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:54:25 +0000   Sat, 25 Oct 2025 09:52:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:54:25 +0000   Sat, 25 Oct 2025 09:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-676314
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                91553f51-64a8-4128-a815-5ed176c5ea05
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-qffxt                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-676314                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-5hnxc                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-676314             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-676314    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-bsxx6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-676314             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-gx27q        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-q7z2s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s               kubelet          Node old-k8s-version-676314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s               kubelet          Node old-k8s-version-676314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s               kubelet          Node old-k8s-version-676314 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node old-k8s-version-676314 event: Registered Node old-k8s-version-676314 in Controller
	  Normal  NodeReady                98s                kubelet          Node old-k8s-version-676314 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node old-k8s-version-676314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node old-k8s-version-676314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node old-k8s-version-676314 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                node-controller  Node old-k8s-version-676314 event: Registered Node old-k8s-version-676314 in Controller
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	
	
	==> etcd [208f766f9a2264a90389e0a3255784544b9fe39f037b5319e382c5f93fe9822c] <==
	{"level":"info","ts":"2025-10-25T09:53:52.604485Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:53:52.604535Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:53:52.604598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-25T09:53:52.604718Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-25T09:53:52.604861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:53:52.604933Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:53:52.606994Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T09:53:52.607114Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T09:53:52.607138Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T09:53:52.607296Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T09:53:52.607334Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T09:53:53.895746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T09:53:53.895797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T09:53:53.895835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T09:53:53.895856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T09:53:53.895864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T09:53:53.895876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-25T09:53:53.89589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T09:53:53.897097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:53:53.897116Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:53:53.897102Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-676314 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T09:53:53.897392Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T09:53:53.897425Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-25T09:53:53.898449Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T09:53:53.898571Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 09:54:50 up  1:37,  0 user,  load average: 5.75, 4.68, 2.87
	Linux old-k8s-version-676314 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [54bf43bd1d263e36fbfe11af76068cfa27fe7fa93a9489c9da3f96cb570ea54f] <==
	I1025 09:53:55.667006       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:53:55.667277       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:53:55.667513       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:53:55.667537       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:53:55.667565       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:53:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:53:55.874813       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:53:55.874958       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:53:55.874977       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:53:55.875266       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:53:56.266229       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:53:56.266260       1 metrics.go:72] Registering metrics
	I1025 09:53:56.266333       1 controller.go:711] "Syncing nftables rules"
	I1025 09:54:05.874220       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:54:05.874289       1 main.go:301] handling current node
	I1025 09:54:15.875422       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:54:15.875483       1 main.go:301] handling current node
	I1025 09:54:25.874848       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:54:25.874898       1 main.go:301] handling current node
	I1025 09:54:35.875786       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:54:35.875827       1 main.go:301] handling current node
	I1025 09:54:45.879745       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:54:45.879812       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1bdedceab1946592ada2ecf0f626b7e132c6c022e02bd19d57ece6929d21893a] <==
	I1025 09:53:54.932708       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:53:54.962541       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:53:54.962612       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 09:53:54.962724       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 09:53:54.962785       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 09:53:54.962917       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 09:53:54.962929       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 09:53:54.963103       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 09:53:54.963172       1 aggregator.go:166] initial CRD sync complete...
	I1025 09:53:54.963179       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 09:53:54.963186       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:53:54.963193       1 cache.go:39] Caches are synced for autoregister controller
	E1025 09:53:54.970282       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:53:54.986434       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 09:53:55.866173       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 09:53:55.866251       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:53:55.899709       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 09:53:55.918556       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:53:55.928215       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:53:55.936369       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 09:53:55.976202       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.165.139"}
	I1025 09:53:55.997323       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.155.186"}
	I1025 09:54:07.194257       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 09:54:07.213638       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 09:54:07.244542       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6d19999376dc5316d611262641b47285d726876bb53bf4a498c9ab5d06c8b371] <==
	I1025 09:54:07.244643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.915µs"
	I1025 09:54:07.246589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.37µs"
	I1025 09:54:07.259204       1 shared_informer.go:318] Caches are synced for ephemeral
	I1025 09:54:07.264507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.308µs"
	I1025 09:54:07.293719       1 shared_informer.go:318] Caches are synced for attach detach
	I1025 09:54:07.298010       1 shared_informer.go:318] Caches are synced for service account
	I1025 09:54:07.308046       1 shared_informer.go:318] Caches are synced for disruption
	I1025 09:54:07.308218       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 09:54:07.333092       1 shared_informer.go:318] Caches are synced for expand
	I1025 09:54:07.350886       1 shared_informer.go:318] Caches are synced for persistent volume
	I1025 09:54:07.352238       1 shared_informer.go:318] Caches are synced for PVC protection
	I1025 09:54:07.378208       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 09:54:07.383388       1 shared_informer.go:318] Caches are synced for stateful set
	I1025 09:54:07.744461       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:54:07.775984       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:54:07.776027       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 09:54:10.109846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.842µs"
	I1025 09:54:11.116519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="118.731µs"
	I1025 09:54:12.120913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.349µs"
	I1025 09:54:14.131823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.865951ms"
	I1025 09:54:14.131953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.715µs"
	I1025 09:54:28.164857       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.54µs"
	I1025 09:54:33.675633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.511289ms"
	I1025 09:54:33.675869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.649µs"
	I1025 09:54:37.550938       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="103.744µs"
	
	
	==> kube-proxy [7e9c9db60e85d067f416ac5fcd2862f37a4db9681670c0ae9adf96066420d66d] <==
	I1025 09:53:55.497797       1 server_others.go:69] "Using iptables proxy"
	I1025 09:53:55.510879       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1025 09:53:55.533607       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:53:55.535894       1 server_others.go:152] "Using iptables Proxier"
	I1025 09:53:55.535948       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 09:53:55.535958       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 09:53:55.536001       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 09:53:55.536222       1 server.go:846] "Version info" version="v1.28.0"
	I1025 09:53:55.536237       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:53:55.536993       1 config.go:188] "Starting service config controller"
	I1025 09:53:55.537016       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 09:53:55.537058       1 config.go:97] "Starting endpoint slice config controller"
	I1025 09:53:55.537067       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 09:53:55.537104       1 config.go:315] "Starting node config controller"
	I1025 09:53:55.537130       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 09:53:55.637199       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 09:53:55.637223       1 shared_informer.go:318] Caches are synced for service config
	I1025 09:53:55.637241       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e2fb3d4360165e17cdfb3eb5777d2f68e824a705c64256daac2adcefb4d9af8b] <==
	I1025 09:53:53.053597       1 serving.go:348] Generated self-signed cert in-memory
	W1025 09:53:54.924963       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:53:54.925001       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:53:54.925015       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:53:54.925026       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:53:54.937890       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1025 09:53:54.937920       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:53:54.939589       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:53:54.939632       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 09:53:54.940690       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 09:53:54.940755       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 09:53:55.040438       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 09:54:07 old-k8s-version-676314 kubelet[714]: I1025 09:54:07.233485     714 topology_manager.go:215] "Topology Admit Handler" podUID="ae3c5da2-5fac-478b-9103-c4bd88f9dd6d" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-gx27q"
	Oct 25 09:54:07 old-k8s-version-676314 kubelet[714]: I1025 09:54:07.362326     714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74nnz\" (UniqueName: \"kubernetes.io/projected/ae3c5da2-5fac-478b-9103-c4bd88f9dd6d-kube-api-access-74nnz\") pod \"dashboard-metrics-scraper-5f989dc9cf-gx27q\" (UID: \"ae3c5da2-5fac-478b-9103-c4bd88f9dd6d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q"
	Oct 25 09:54:07 old-k8s-version-676314 kubelet[714]: I1025 09:54:07.362434     714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ae3c5da2-5fac-478b-9103-c4bd88f9dd6d-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-gx27q\" (UID: \"ae3c5da2-5fac-478b-9103-c4bd88f9dd6d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q"
	Oct 25 09:54:07 old-k8s-version-676314 kubelet[714]: I1025 09:54:07.362472     714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/18009715-0497-4ac7-ae7f-2e2ec645bf27-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-q7z2s\" (UID: \"18009715-0497-4ac7-ae7f-2e2ec645bf27\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-q7z2s"
	Oct 25 09:54:07 old-k8s-version-676314 kubelet[714]: I1025 09:54:07.362643     714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltsrs\" (UniqueName: \"kubernetes.io/projected/18009715-0497-4ac7-ae7f-2e2ec645bf27-kube-api-access-ltsrs\") pod \"kubernetes-dashboard-8694d4445c-q7z2s\" (UID: \"18009715-0497-4ac7-ae7f-2e2ec645bf27\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-q7z2s"
	Oct 25 09:54:10 old-k8s-version-676314 kubelet[714]: I1025 09:54:10.098076     714 scope.go:117] "RemoveContainer" containerID="0d9e786aabc5b5820dc45e579aa03fd62cd695a97715405b1a01b620244f5182"
	Oct 25 09:54:11 old-k8s-version-676314 kubelet[714]: I1025 09:54:11.102949     714 scope.go:117] "RemoveContainer" containerID="0d9e786aabc5b5820dc45e579aa03fd62cd695a97715405b1a01b620244f5182"
	Oct 25 09:54:11 old-k8s-version-676314 kubelet[714]: I1025 09:54:11.103314     714 scope.go:117] "RemoveContainer" containerID="af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98"
	Oct 25 09:54:11 old-k8s-version-676314 kubelet[714]: E1025 09:54:11.103732     714 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gx27q_kubernetes-dashboard(ae3c5da2-5fac-478b-9103-c4bd88f9dd6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q" podUID="ae3c5da2-5fac-478b-9103-c4bd88f9dd6d"
	Oct 25 09:54:12 old-k8s-version-676314 kubelet[714]: I1025 09:54:12.107413     714 scope.go:117] "RemoveContainer" containerID="af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98"
	Oct 25 09:54:12 old-k8s-version-676314 kubelet[714]: E1025 09:54:12.107811     714 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gx27q_kubernetes-dashboard(ae3c5da2-5fac-478b-9103-c4bd88f9dd6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q" podUID="ae3c5da2-5fac-478b-9103-c4bd88f9dd6d"
	Oct 25 09:54:14 old-k8s-version-676314 kubelet[714]: I1025 09:54:14.125068     714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-q7z2s" podStartSLOduration=1.131901286 podCreationTimestamp="2025-10-25 09:54:07 +0000 UTC" firstStartedPulling="2025-10-25 09:54:07.582546057 +0000 UTC m=+15.658321627" lastFinishedPulling="2025-10-25 09:54:13.575633331 +0000 UTC m=+21.651408917" observedRunningTime="2025-10-25 09:54:14.124622375 +0000 UTC m=+22.200397964" watchObservedRunningTime="2025-10-25 09:54:14.124988576 +0000 UTC m=+22.200764164"
	Oct 25 09:54:17 old-k8s-version-676314 kubelet[714]: I1025 09:54:17.535072     714 scope.go:117] "RemoveContainer" containerID="af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98"
	Oct 25 09:54:17 old-k8s-version-676314 kubelet[714]: E1025 09:54:17.535426     714 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gx27q_kubernetes-dashboard(ae3c5da2-5fac-478b-9103-c4bd88f9dd6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q" podUID="ae3c5da2-5fac-478b-9103-c4bd88f9dd6d"
	Oct 25 09:54:26 old-k8s-version-676314 kubelet[714]: I1025 09:54:26.142637     714 scope.go:117] "RemoveContainer" containerID="672fe80d5a9e8a660c7eeaa5838bb3818c5b279a10306e12acf11595a752ce55"
	Oct 25 09:54:28 old-k8s-version-676314 kubelet[714]: I1025 09:54:28.027276     714 scope.go:117] "RemoveContainer" containerID="af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98"
	Oct 25 09:54:28 old-k8s-version-676314 kubelet[714]: I1025 09:54:28.151879     714 scope.go:117] "RemoveContainer" containerID="af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98"
	Oct 25 09:54:28 old-k8s-version-676314 kubelet[714]: I1025 09:54:28.152148     714 scope.go:117] "RemoveContainer" containerID="ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a"
	Oct 25 09:54:28 old-k8s-version-676314 kubelet[714]: E1025 09:54:28.152542     714 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gx27q_kubernetes-dashboard(ae3c5da2-5fac-478b-9103-c4bd88f9dd6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q" podUID="ae3c5da2-5fac-478b-9103-c4bd88f9dd6d"
	Oct 25 09:54:37 old-k8s-version-676314 kubelet[714]: I1025 09:54:37.535189     714 scope.go:117] "RemoveContainer" containerID="ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a"
	Oct 25 09:54:37 old-k8s-version-676314 kubelet[714]: E1025 09:54:37.535662     714 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gx27q_kubernetes-dashboard(ae3c5da2-5fac-478b-9103-c4bd88f9dd6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q" podUID="ae3c5da2-5fac-478b-9103-c4bd88f9dd6d"
	Oct 25 09:54:47 old-k8s-version-676314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:54:47 old-k8s-version-676314 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:54:47 old-k8s-version-676314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:54:47 old-k8s-version-676314 systemd[1]: kubelet.service: Consumed 1.653s CPU time.
	
	
	==> kubernetes-dashboard [32df4b3331837228098cd723b7cc594d68d17397153f50ef57dfee6afc0cfab0] <==
	2025/10/25 09:54:13 Using namespace: kubernetes-dashboard
	2025/10/25 09:54:13 Using in-cluster config to connect to apiserver
	2025/10/25 09:54:13 Using secret token for csrf signing
	2025/10/25 09:54:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:54:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:54:13 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 09:54:13 Generating JWE encryption key
	2025/10/25 09:54:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:54:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:54:13 Initializing JWE encryption key from synchronized object
	2025/10/25 09:54:13 Creating in-cluster Sidecar client
	2025/10/25 09:54:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:54:13 Serving insecurely on HTTP port: 9090
	2025/10/25 09:54:13 Starting overwatch
	2025/10/25 09:54:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [13d9708c5d841b464c18aa3085829c1ab76dddd4a9cf55a722726920eebfa86f] <==
	I1025 09:54:26.191544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:54:26.200809       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:54:26.200870       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 09:54:43.599232       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:54:43.599398       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"830beb51-92da-458c-968a-0c40cd8858b2", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-676314_32195fa2-366b-4f41-8d79-ca565d0310a6 became leader
	I1025 09:54:43.599475       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-676314_32195fa2-366b-4f41-8d79-ca565d0310a6!
	I1025 09:54:43.700447       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-676314_32195fa2-366b-4f41-8d79-ca565d0310a6!
	
	
	==> storage-provisioner [672fe80d5a9e8a660c7eeaa5838bb3818c5b279a10306e12acf11595a752ce55] <==
	I1025 09:53:55.437300       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:54:25.440046       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-676314 -n old-k8s-version-676314
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-676314 -n old-k8s-version-676314: exit status 2 (358.958289ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-676314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-676314
helpers_test.go:243: (dbg) docker inspect old-k8s-version-676314:

-- stdout --
	[
	    {
	        "Id": "05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7",
	        "Created": "2025-10-25T09:52:30.302289758Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 442093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:53:45.622107147Z",
	            "FinishedAt": "2025-10-25T09:53:43.51249455Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7/hosts",
	        "LogPath": "/var/lib/docker/containers/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7/05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7-json.log",
	        "Name": "/old-k8s-version-676314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-676314:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-676314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05255cf7a9be6883ee86415520836bb3a26adcadc5b2b95d2dbb6e06cc7b71b7",
	                "LowerDir": "/var/lib/docker/overlay2/ee55f66edc956ba04d8a48ac2f58334c6be8a80c382de1ca530ee94ac23a8ce7-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ee55f66edc956ba04d8a48ac2f58334c6be8a80c382de1ca530ee94ac23a8ce7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ee55f66edc956ba04d8a48ac2f58334c6be8a80c382de1ca530ee94ac23a8ce7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ee55f66edc956ba04d8a48ac2f58334c6be8a80c382de1ca530ee94ac23a8ce7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-676314",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-676314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-676314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-676314",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-676314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9dcc2b233f31b8e3ca5ec197e2e4c82058e4362ca9082e4e54f9bb21d047d45d",
	            "SandboxKey": "/var/run/docker/netns/9dcc2b233f31",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33240"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33241"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33244"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33242"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33243"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-676314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:57:0b:95:cd:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f66217c06b76e94123bb60007cf891525ec1407362c18c5530791b0803181dbc",
	                    "EndpointID": "2582cb626d770d64ca1bb88d8e662d60dd3abb5ffe12c0ebe4b8f8af33a141ed",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-676314",
	                        "05255cf7a9be"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-676314 -n old-k8s-version-676314
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-676314 -n old-k8s-version-676314: exit status 2 (327.542439ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-676314 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-676314 logs -n 25: (1.06210575s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p newest-cni-042675 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable metrics-server -p no-preload-656799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ delete  │ -p newest-cni-042675                                                                                                                                                                                                                          │ newest-cni-042675            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-001549                                                                                                                                                                                                               │ disable-driver-mounts-001549 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p no-preload-656799 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-676314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-656799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-880773 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-880773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-846915 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ stop    │ -p embed-certs-846915 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ no-preload-656799 image list --format=json                                                                                                                                                                                                    │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p no-preload-656799 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ delete  │ -p no-preload-656799                                                                                                                                                                                                                          │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ delete  │ -p no-preload-656799                                                                                                                                                                                                                          │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ old-k8s-version-676314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p old-k8s-version-676314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-846915 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:54:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:54:50.490480  457008 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:54:50.490778  457008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:50.490791  457008 out.go:374] Setting ErrFile to fd 2...
	I1025 09:54:50.490795  457008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:50.491023  457008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:54:50.491458  457008 out.go:368] Setting JSON to false
	I1025 09:54:50.492784  457008 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5834,"bootTime":1761380256,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:54:50.492874  457008 start.go:141] virtualization: kvm guest
	I1025 09:54:50.494727  457008 out.go:179] * [embed-certs-846915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:54:50.495938  457008 notify.go:220] Checking for updates...
	I1025 09:54:50.495955  457008 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:54:50.497200  457008 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:54:50.498359  457008 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:50.499624  457008 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:54:50.500821  457008 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:54:50.501999  457008 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:54:50.503677  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:50.504213  457008 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:54:50.529014  457008 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:54:50.529154  457008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:50.591445  457008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:54:50.580621433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:50.591560  457008 docker.go:318] overlay module found
	I1025 09:54:50.592851  457008 out.go:179] * Using the docker driver based on existing profile
	I1025 09:54:50.593988  457008 start.go:305] selected driver: docker
	I1025 09:54:50.594007  457008 start.go:925] validating driver "docker" against &{Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:50.594132  457008 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:54:50.594767  457008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:50.658713  457008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:54:50.645802852 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:50.659072  457008 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:50.659108  457008 cni.go:84] Creating CNI manager for ""
	I1025 09:54:50.659179  457008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:50.659237  457008 start.go:349] cluster config:
	{Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:50.660979  457008 out.go:179] * Starting "embed-certs-846915" primary control-plane node in "embed-certs-846915" cluster
	I1025 09:54:50.662225  457008 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:54:50.663491  457008 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:54:50.664700  457008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:50.664762  457008 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:54:50.664778  457008 cache.go:58] Caching tarball of preloaded images
	I1025 09:54:50.664819  457008 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:54:50.664906  457008 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:54:50.664923  457008 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:54:50.665060  457008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/config.json ...
	I1025 09:54:50.686709  457008 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:54:50.686734  457008 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:54:50.686758  457008 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:54:50.686788  457008 start.go:360] acquireMachinesLock for embed-certs-846915: {Name:mk6afaad62774c341d106d1a8d37743a274e5cb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:54:50.686902  457008 start.go:364] duration metric: took 69.005µs to acquireMachinesLock for "embed-certs-846915"
	I1025 09:54:50.686926  457008 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:54:50.686937  457008 fix.go:54] fixHost starting: 
	I1025 09:54:50.687222  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:50.706726  457008 fix.go:112] recreateIfNeeded on embed-certs-846915: state=Stopped err=<nil>
	W1025 09:54:50.706755  457008 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 25 09:54:13 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:13.61112898Z" level=info msg="Created container 32df4b3331837228098cd723b7cc594d68d17397153f50ef57dfee6afc0cfab0: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-q7z2s/kubernetes-dashboard" id=b657cbcb-dc88-4516-82f9-a3e5034f8c2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:13 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:13.6117702Z" level=info msg="Starting container: 32df4b3331837228098cd723b7cc594d68d17397153f50ef57dfee6afc0cfab0" id=a54e9b9d-1cbc-4a4a-9ec4-c2126cb27cd1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:13 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:13.613457993Z" level=info msg="Started container" PID=1730 containerID=32df4b3331837228098cd723b7cc594d68d17397153f50ef57dfee6afc0cfab0 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-q7z2s/kubernetes-dashboard id=a54e9b9d-1cbc-4a4a-9ec4-c2126cb27cd1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb54e83ad2874e7750eb8f07df38a8ae8c27732ad75b9d2d2104d20cf4e8e4cf
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.143071375Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=039a1649-67a7-46a9-acb1-25bc0d85a58f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.144087774Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=30a79f9f-9fd0-4755-9d0e-3a40413f043a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.145094958Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6a0b97c4-ef7c-4ad5-81e2-63cdebfe9c92 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.145226211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.150607536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.150812559Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/77c9c00d8445b5fa091e6c5df35c02c0c88caf68fc3b946e9e17aa71ac50b588/merged/etc/passwd: no such file or directory"
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.150846138Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/77c9c00d8445b5fa091e6c5df35c02c0c88caf68fc3b946e9e17aa71ac50b588/merged/etc/group: no such file or directory"
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.151113043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.175520388Z" level=info msg="Created container 13d9708c5d841b464c18aa3085829c1ab76dddd4a9cf55a722726920eebfa86f: kube-system/storage-provisioner/storage-provisioner" id=6a0b97c4-ef7c-4ad5-81e2-63cdebfe9c92 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.176149465Z" level=info msg="Starting container: 13d9708c5d841b464c18aa3085829c1ab76dddd4a9cf55a722726920eebfa86f" id=27e66b2f-b20a-4cff-ad49-7feee1437b1a name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:26 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:26.178433859Z" level=info msg="Started container" PID=1754 containerID=13d9708c5d841b464c18aa3085829c1ab76dddd4a9cf55a722726920eebfa86f description=kube-system/storage-provisioner/storage-provisioner id=27e66b2f-b20a-4cff-ad49-7feee1437b1a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f08e6704603faeee60ac35ecfde218631d6f2a6cdfe7025ef302baa3b96a6549
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.027938818Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7f243b7b-ef0a-4b4f-b41f-66ccc7c0d2e1 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.028879049Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3d6ae517-a31c-4020-8c95-6c9640a9cf41 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.029955612Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q/dashboard-metrics-scraper" id=498cce77-c4f7-4e83-8163-a894a0cae621 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.030140146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.036816419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.037453033Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.080338658Z" level=info msg="Created container ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q/dashboard-metrics-scraper" id=498cce77-c4f7-4e83-8163-a894a0cae621 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.08097355Z" level=info msg="Starting container: ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a" id=fd45845e-b408-46f1-98e8-641d2ac5ae37 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.082885211Z" level=info msg="Started container" PID=1770 containerID=ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q/dashboard-metrics-scraper id=fd45845e-b408-46f1-98e8-641d2ac5ae37 name=/runtime.v1.RuntimeService/StartContainer sandboxID=18f5ffd9a1d747f9af6b6eb16c606057fe36884904934d9d73210302d5071330
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.153074934Z" level=info msg="Removing container: af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98" id=17485b4d-f56d-43f2-b605-16a8e859ea71 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:54:28 old-k8s-version-676314 crio[559]: time="2025-10-25T09:54:28.163971835Z" level=info msg="Removed container af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q/dashboard-metrics-scraper" id=17485b4d-f56d-43f2-b605-16a8e859ea71 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	ce0180eba192d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   18f5ffd9a1d74       dashboard-metrics-scraper-5f989dc9cf-gx27q       kubernetes-dashboard
	13d9708c5d841       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago      Running             storage-provisioner         1                   f08e6704603fa       storage-provisioner                              kube-system
	32df4b3331837       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   bb54e83ad2874       kubernetes-dashboard-8694d4445c-q7z2s            kubernetes-dashboard
	44e78a06fe8d3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           56 seconds ago      Running             coredns                     0                   9ab60ce544b7c       coredns-5dd5756b68-qffxt                         kube-system
	9ef737de195e6       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   9238f643c231d       busybox                                          default
	54bf43bd1d263       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   1f9a5977383c1       kindnet-5hnxc                                    kube-system
	672fe80d5a9e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   f08e6704603fa       storage-provisioner                              kube-system
	7e9c9db60e85d       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           56 seconds ago      Running             kube-proxy                  0                   3a472f87f5070       kube-proxy-bsxx6                                 kube-system
	1bdedceab1946       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   9b882f0cfea3c       kube-apiserver-old-k8s-version-676314            kube-system
	208f766f9a226       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   2dfb5118163a1       etcd-old-k8s-version-676314                      kube-system
	e2fb3d4360165       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   c99eda1634540       kube-scheduler-old-k8s-version-676314            kube-system
	6d19999376dc5       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   64c8b58390c3d       kube-controller-manager-old-k8s-version-676314   kube-system
	
	
	==> coredns [44e78a06fe8d3364412a49fe97c33eb05da0e0b00edd440ec10e521482e09243] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47099 - 32185 "HINFO IN 1945696538429773668.5261800709378654143. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025319975s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-676314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-676314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=old-k8s-version-676314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_52_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-676314
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:54:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:54:25 +0000   Sat, 25 Oct 2025 09:52:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:54:25 +0000   Sat, 25 Oct 2025 09:52:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:54:25 +0000   Sat, 25 Oct 2025 09:52:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:54:25 +0000   Sat, 25 Oct 2025 09:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-676314
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                91553f51-64a8-4128-a815-5ed176c5ea05
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-qffxt                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-old-k8s-version-676314                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m6s
	  kube-system                 kindnet-5hnxc                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-676314             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-676314    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-bsxx6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-old-k8s-version-676314             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-gx27q        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-q7z2s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m6s               kubelet          Node old-k8s-version-676314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s               kubelet          Node old-k8s-version-676314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s               kubelet          Node old-k8s-version-676314 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m6s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s               node-controller  Node old-k8s-version-676314 event: Registered Node old-k8s-version-676314 in Controller
	  Normal  NodeReady                100s               kubelet          Node old-k8s-version-676314 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node old-k8s-version-676314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node old-k8s-version-676314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node old-k8s-version-676314 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-676314 event: Registered Node old-k8s-version-676314 in Controller
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	
	
	==> etcd [208f766f9a2264a90389e0a3255784544b9fe39f037b5319e382c5f93fe9822c] <==
	{"level":"info","ts":"2025-10-25T09:53:52.604485Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:53:52.604535Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:53:52.604598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-25T09:53:52.604718Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-25T09:53:52.604861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:53:52.604933Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:53:52.606994Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T09:53:52.607114Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T09:53:52.607138Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T09:53:52.607296Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T09:53:52.607334Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T09:53:53.895746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T09:53:53.895797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T09:53:53.895835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T09:53:53.895856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T09:53:53.895864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T09:53:53.895876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-25T09:53:53.89589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T09:53:53.897097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:53:53.897116Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:53:53.897102Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-676314 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T09:53:53.897392Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T09:53:53.897425Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-25T09:53:53.898449Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T09:53:53.898571Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 09:54:52 up  1:37,  0 user,  load average: 5.75, 4.68, 2.87
	Linux old-k8s-version-676314 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [54bf43bd1d263e36fbfe11af76068cfa27fe7fa93a9489c9da3f96cb570ea54f] <==
	I1025 09:53:55.667006       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:53:55.667277       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:53:55.667513       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:53:55.667537       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:53:55.667565       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:53:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:53:55.874813       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:53:55.874958       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:53:55.874977       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:53:55.875266       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:53:56.266229       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:53:56.266260       1 metrics.go:72] Registering metrics
	I1025 09:53:56.266333       1 controller.go:711] "Syncing nftables rules"
	I1025 09:54:05.874220       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:54:05.874289       1 main.go:301] handling current node
	I1025 09:54:15.875422       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:54:15.875483       1 main.go:301] handling current node
	I1025 09:54:25.874848       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:54:25.874898       1 main.go:301] handling current node
	I1025 09:54:35.875786       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:54:35.875827       1 main.go:301] handling current node
	I1025 09:54:45.879745       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:54:45.879812       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1bdedceab1946592ada2ecf0f626b7e132c6c022e02bd19d57ece6929d21893a] <==
	I1025 09:53:54.932708       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:53:54.962541       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:53:54.962612       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 09:53:54.962724       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 09:53:54.962785       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 09:53:54.962917       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 09:53:54.962929       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 09:53:54.963103       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 09:53:54.963172       1 aggregator.go:166] initial CRD sync complete...
	I1025 09:53:54.963179       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 09:53:54.963186       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:53:54.963193       1 cache.go:39] Caches are synced for autoregister controller
	E1025 09:53:54.970282       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:53:54.986434       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 09:53:55.866173       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 09:53:55.866251       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:53:55.899709       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 09:53:55.918556       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:53:55.928215       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:53:55.936369       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 09:53:55.976202       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.165.139"}
	I1025 09:53:55.997323       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.155.186"}
	I1025 09:54:07.194257       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 09:54:07.213638       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 09:54:07.244542       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6d19999376dc5316d611262641b47285d726876bb53bf4a498c9ab5d06c8b371] <==
	I1025 09:54:07.244643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.915µs"
	I1025 09:54:07.246589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.37µs"
	I1025 09:54:07.259204       1 shared_informer.go:318] Caches are synced for ephemeral
	I1025 09:54:07.264507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.308µs"
	I1025 09:54:07.293719       1 shared_informer.go:318] Caches are synced for attach detach
	I1025 09:54:07.298010       1 shared_informer.go:318] Caches are synced for service account
	I1025 09:54:07.308046       1 shared_informer.go:318] Caches are synced for disruption
	I1025 09:54:07.308218       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 09:54:07.333092       1 shared_informer.go:318] Caches are synced for expand
	I1025 09:54:07.350886       1 shared_informer.go:318] Caches are synced for persistent volume
	I1025 09:54:07.352238       1 shared_informer.go:318] Caches are synced for PVC protection
	I1025 09:54:07.378208       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 09:54:07.383388       1 shared_informer.go:318] Caches are synced for stateful set
	I1025 09:54:07.744461       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:54:07.775984       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:54:07.776027       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 09:54:10.109846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.842µs"
	I1025 09:54:11.116519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="118.731µs"
	I1025 09:54:12.120913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.349µs"
	I1025 09:54:14.131823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.865951ms"
	I1025 09:54:14.131953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.715µs"
	I1025 09:54:28.164857       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.54µs"
	I1025 09:54:33.675633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.511289ms"
	I1025 09:54:33.675869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.649µs"
	I1025 09:54:37.550938       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="103.744µs"
	
	
	==> kube-proxy [7e9c9db60e85d067f416ac5fcd2862f37a4db9681670c0ae9adf96066420d66d] <==
	I1025 09:53:55.497797       1 server_others.go:69] "Using iptables proxy"
	I1025 09:53:55.510879       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1025 09:53:55.533607       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:53:55.535894       1 server_others.go:152] "Using iptables Proxier"
	I1025 09:53:55.535948       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 09:53:55.535958       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 09:53:55.536001       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 09:53:55.536222       1 server.go:846] "Version info" version="v1.28.0"
	I1025 09:53:55.536237       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:53:55.536993       1 config.go:188] "Starting service config controller"
	I1025 09:53:55.537016       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 09:53:55.537058       1 config.go:97] "Starting endpoint slice config controller"
	I1025 09:53:55.537067       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 09:53:55.537104       1 config.go:315] "Starting node config controller"
	I1025 09:53:55.537130       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 09:53:55.637199       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 09:53:55.637223       1 shared_informer.go:318] Caches are synced for service config
	I1025 09:53:55.637241       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e2fb3d4360165e17cdfb3eb5777d2f68e824a705c64256daac2adcefb4d9af8b] <==
	I1025 09:53:53.053597       1 serving.go:348] Generated self-signed cert in-memory
	W1025 09:53:54.924963       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:53:54.925001       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:53:54.925015       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:53:54.925026       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:53:54.937890       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1025 09:53:54.937920       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:53:54.939589       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:53:54.939632       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 09:53:54.940690       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 09:53:54.940755       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 09:53:55.040438       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 09:54:07 old-k8s-version-676314 kubelet[714]: I1025 09:54:07.233485     714 topology_manager.go:215] "Topology Admit Handler" podUID="ae3c5da2-5fac-478b-9103-c4bd88f9dd6d" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-gx27q"
	Oct 25 09:54:07 old-k8s-version-676314 kubelet[714]: I1025 09:54:07.362326     714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74nnz\" (UniqueName: \"kubernetes.io/projected/ae3c5da2-5fac-478b-9103-c4bd88f9dd6d-kube-api-access-74nnz\") pod \"dashboard-metrics-scraper-5f989dc9cf-gx27q\" (UID: \"ae3c5da2-5fac-478b-9103-c4bd88f9dd6d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q"
	Oct 25 09:54:07 old-k8s-version-676314 kubelet[714]: I1025 09:54:07.362434     714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ae3c5da2-5fac-478b-9103-c4bd88f9dd6d-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-gx27q\" (UID: \"ae3c5da2-5fac-478b-9103-c4bd88f9dd6d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q"
	Oct 25 09:54:07 old-k8s-version-676314 kubelet[714]: I1025 09:54:07.362472     714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/18009715-0497-4ac7-ae7f-2e2ec645bf27-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-q7z2s\" (UID: \"18009715-0497-4ac7-ae7f-2e2ec645bf27\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-q7z2s"
	Oct 25 09:54:07 old-k8s-version-676314 kubelet[714]: I1025 09:54:07.362643     714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltsrs\" (UniqueName: \"kubernetes.io/projected/18009715-0497-4ac7-ae7f-2e2ec645bf27-kube-api-access-ltsrs\") pod \"kubernetes-dashboard-8694d4445c-q7z2s\" (UID: \"18009715-0497-4ac7-ae7f-2e2ec645bf27\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-q7z2s"
	Oct 25 09:54:10 old-k8s-version-676314 kubelet[714]: I1025 09:54:10.098076     714 scope.go:117] "RemoveContainer" containerID="0d9e786aabc5b5820dc45e579aa03fd62cd695a97715405b1a01b620244f5182"
	Oct 25 09:54:11 old-k8s-version-676314 kubelet[714]: I1025 09:54:11.102949     714 scope.go:117] "RemoveContainer" containerID="0d9e786aabc5b5820dc45e579aa03fd62cd695a97715405b1a01b620244f5182"
	Oct 25 09:54:11 old-k8s-version-676314 kubelet[714]: I1025 09:54:11.103314     714 scope.go:117] "RemoveContainer" containerID="af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98"
	Oct 25 09:54:11 old-k8s-version-676314 kubelet[714]: E1025 09:54:11.103732     714 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gx27q_kubernetes-dashboard(ae3c5da2-5fac-478b-9103-c4bd88f9dd6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q" podUID="ae3c5da2-5fac-478b-9103-c4bd88f9dd6d"
	Oct 25 09:54:12 old-k8s-version-676314 kubelet[714]: I1025 09:54:12.107413     714 scope.go:117] "RemoveContainer" containerID="af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98"
	Oct 25 09:54:12 old-k8s-version-676314 kubelet[714]: E1025 09:54:12.107811     714 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gx27q_kubernetes-dashboard(ae3c5da2-5fac-478b-9103-c4bd88f9dd6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q" podUID="ae3c5da2-5fac-478b-9103-c4bd88f9dd6d"
	Oct 25 09:54:14 old-k8s-version-676314 kubelet[714]: I1025 09:54:14.125068     714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-q7z2s" podStartSLOduration=1.131901286 podCreationTimestamp="2025-10-25 09:54:07 +0000 UTC" firstStartedPulling="2025-10-25 09:54:07.582546057 +0000 UTC m=+15.658321627" lastFinishedPulling="2025-10-25 09:54:13.575633331 +0000 UTC m=+21.651408917" observedRunningTime="2025-10-25 09:54:14.124622375 +0000 UTC m=+22.200397964" watchObservedRunningTime="2025-10-25 09:54:14.124988576 +0000 UTC m=+22.200764164"
	Oct 25 09:54:17 old-k8s-version-676314 kubelet[714]: I1025 09:54:17.535072     714 scope.go:117] "RemoveContainer" containerID="af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98"
	Oct 25 09:54:17 old-k8s-version-676314 kubelet[714]: E1025 09:54:17.535426     714 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gx27q_kubernetes-dashboard(ae3c5da2-5fac-478b-9103-c4bd88f9dd6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q" podUID="ae3c5da2-5fac-478b-9103-c4bd88f9dd6d"
	Oct 25 09:54:26 old-k8s-version-676314 kubelet[714]: I1025 09:54:26.142637     714 scope.go:117] "RemoveContainer" containerID="672fe80d5a9e8a660c7eeaa5838bb3818c5b279a10306e12acf11595a752ce55"
	Oct 25 09:54:28 old-k8s-version-676314 kubelet[714]: I1025 09:54:28.027276     714 scope.go:117] "RemoveContainer" containerID="af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98"
	Oct 25 09:54:28 old-k8s-version-676314 kubelet[714]: I1025 09:54:28.151879     714 scope.go:117] "RemoveContainer" containerID="af6cf5cd96943c39604558d408e004d24f7fbfb6e6df060d167579a1b7d42f98"
	Oct 25 09:54:28 old-k8s-version-676314 kubelet[714]: I1025 09:54:28.152148     714 scope.go:117] "RemoveContainer" containerID="ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a"
	Oct 25 09:54:28 old-k8s-version-676314 kubelet[714]: E1025 09:54:28.152542     714 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gx27q_kubernetes-dashboard(ae3c5da2-5fac-478b-9103-c4bd88f9dd6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q" podUID="ae3c5da2-5fac-478b-9103-c4bd88f9dd6d"
	Oct 25 09:54:37 old-k8s-version-676314 kubelet[714]: I1025 09:54:37.535189     714 scope.go:117] "RemoveContainer" containerID="ce0180eba192d9b1f16b7605a2d136a4e464e1a0ac44966da13f989f9f83875a"
	Oct 25 09:54:37 old-k8s-version-676314 kubelet[714]: E1025 09:54:37.535662     714 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-gx27q_kubernetes-dashboard(ae3c5da2-5fac-478b-9103-c4bd88f9dd6d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-gx27q" podUID="ae3c5da2-5fac-478b-9103-c4bd88f9dd6d"
	Oct 25 09:54:47 old-k8s-version-676314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:54:47 old-k8s-version-676314 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:54:47 old-k8s-version-676314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:54:47 old-k8s-version-676314 systemd[1]: kubelet.service: Consumed 1.653s CPU time.
	
	
	==> kubernetes-dashboard [32df4b3331837228098cd723b7cc594d68d17397153f50ef57dfee6afc0cfab0] <==
	2025/10/25 09:54:13 Starting overwatch
	2025/10/25 09:54:13 Using namespace: kubernetes-dashboard
	2025/10/25 09:54:13 Using in-cluster config to connect to apiserver
	2025/10/25 09:54:13 Using secret token for csrf signing
	2025/10/25 09:54:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:54:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:54:13 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 09:54:13 Generating JWE encryption key
	2025/10/25 09:54:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:54:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:54:13 Initializing JWE encryption key from synchronized object
	2025/10/25 09:54:13 Creating in-cluster Sidecar client
	2025/10/25 09:54:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:54:13 Serving insecurely on HTTP port: 9090
	2025/10/25 09:54:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [13d9708c5d841b464c18aa3085829c1ab76dddd4a9cf55a722726920eebfa86f] <==
	I1025 09:54:26.191544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:54:26.200809       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:54:26.200870       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 09:54:43.599232       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:54:43.599398       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"830beb51-92da-458c-968a-0c40cd8858b2", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-676314_32195fa2-366b-4f41-8d79-ca565d0310a6 became leader
	I1025 09:54:43.599475       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-676314_32195fa2-366b-4f41-8d79-ca565d0310a6!
	I1025 09:54:43.700447       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-676314_32195fa2-366b-4f41-8d79-ca565d0310a6!
	
	
	==> storage-provisioner [672fe80d5a9e8a660c7eeaa5838bb3818c5b279a10306e12acf11595a752ce55] <==
	I1025 09:53:55.437300       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:54:25.440046       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-676314 -n old-k8s-version-676314
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-676314 -n old-k8s-version-676314: exit status 2 (327.983117ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-676314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.93s)
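
Note: the post-mortem checks above can be re-run by hand against the profile. A minimal sketch, assuming the old-k8s-version-676314 profile from this run still exists; these are the same commands helpers_test.go invokes at lines 262 and 269:

	# Query only the API server field of the profile status (helpers_test.go:262).
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-676314 -n old-k8s-version-676314
	# List pods in any phase other than Running, across all namespaces (helpers_test.go:269).
	kubectl --context old-k8s-version-676314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running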

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-880773 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-880773 --alsologtostderr -v=1: exit status 80 (2.422742858s)

-- stdout --
	* Pausing node default-k8s-diff-port-880773 ... 
	
	

-- /stdout --
** stderr ** 
	I1025 09:55:21.791634  460969 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:55:21.791929  460969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:55:21.791940  460969 out.go:374] Setting ErrFile to fd 2...
	I1025 09:55:21.791944  460969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:55:21.792151  460969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:55:21.792402  460969 out.go:368] Setting JSON to false
	I1025 09:55:21.792441  460969 mustload.go:65] Loading cluster: default-k8s-diff-port-880773
	I1025 09:55:21.792783  460969 config.go:182] Loaded profile config "default-k8s-diff-port-880773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:55:21.793188  460969 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-880773 --format={{.State.Status}}
	I1025 09:55:21.811671  460969 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:55:21.811947  460969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:55:21.867216  460969 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:55:21.857375016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:55:21.867877  460969 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-880773 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:55:21.869780  460969 out.go:179] * Pausing node default-k8s-diff-port-880773 ... 
	I1025 09:55:21.870972  460969 host.go:66] Checking if "default-k8s-diff-port-880773" exists ...
	I1025 09:55:21.871222  460969 ssh_runner.go:195] Run: systemctl --version
	I1025 09:55:21.871258  460969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-880773
	I1025 09:55:21.889151  460969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/default-k8s-diff-port-880773/id_rsa Username:docker}
	I1025 09:55:21.990663  460969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:55:22.012915  460969 pause.go:52] kubelet running: true
	I1025 09:55:22.012991  460969 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:55:22.177066  460969 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:55:22.177154  460969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:55:22.245102  460969 cri.go:89] found id: "8be1b97ae8c99fdf0cfe2552030fe0a0e942bad96c59260acda25fc373b2370f"
	I1025 09:55:22.245127  460969 cri.go:89] found id: "9e0ebd1eedf1bfec2ce3bb2e23264d77a78263d4507d8318f07c179eaf43ef90"
	I1025 09:55:22.245131  460969 cri.go:89] found id: "e5dc0927bdb5b9abca88e2f1181fcab28f3d98593d8c76d8d66e67df6c8841e7"
	I1025 09:55:22.245134  460969 cri.go:89] found id: "7d1412ad484fd280b8e475edb38e636d8b265e528fb5edc4c49694d11aa74026"
	I1025 09:55:22.245137  460969 cri.go:89] found id: "3fb115552602e82341d7e2918cd812563ad3b933adfcf256e50f6b6234235080"
	I1025 09:55:22.245140  460969 cri.go:89] found id: "8a40c304121945c99334f375a4fc8f1073390b82cca6a44c6e2b224a5804ed43"
	I1025 09:55:22.245143  460969 cri.go:89] found id: "1099e940dc59e4a7fc6edf4f82c427fc4633cbc73d1759f0ef430fccd002219f"
	I1025 09:55:22.245145  460969 cri.go:89] found id: "b7360eb6624b8284557553c607130a8087e3690512dcc9caea4351f9f876fd02"
	I1025 09:55:22.245147  460969 cri.go:89] found id: "9a7e2aef555d4452a0b73ff6d39e556aaf40affe43c7adcaf8fc119b3910c298"
	I1025 09:55:22.245153  460969 cri.go:89] found id: "ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609"
	I1025 09:55:22.245157  460969 cri.go:89] found id: "14107879e5563ed6b5a7c822a1deb19829cc37e77da237976440a7dadb7144c1"
	I1025 09:55:22.245166  460969 cri.go:89] found id: ""
	I1025 09:55:22.245203  460969 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:55:22.257136  460969 retry.go:31] will retry after 287.899607ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:55:22Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:55:22.545691  460969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:55:22.558959  460969 pause.go:52] kubelet running: false
	I1025 09:55:22.559051  460969 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:55:22.695967  460969 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:55:22.696061  460969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:55:22.762924  460969 cri.go:89] found id: "8be1b97ae8c99fdf0cfe2552030fe0a0e942bad96c59260acda25fc373b2370f"
	I1025 09:55:22.762946  460969 cri.go:89] found id: "9e0ebd1eedf1bfec2ce3bb2e23264d77a78263d4507d8318f07c179eaf43ef90"
	I1025 09:55:22.762950  460969 cri.go:89] found id: "e5dc0927bdb5b9abca88e2f1181fcab28f3d98593d8c76d8d66e67df6c8841e7"
	I1025 09:55:22.762953  460969 cri.go:89] found id: "7d1412ad484fd280b8e475edb38e636d8b265e528fb5edc4c49694d11aa74026"
	I1025 09:55:22.762955  460969 cri.go:89] found id: "3fb115552602e82341d7e2918cd812563ad3b933adfcf256e50f6b6234235080"
	I1025 09:55:22.762958  460969 cri.go:89] found id: "8a40c304121945c99334f375a4fc8f1073390b82cca6a44c6e2b224a5804ed43"
	I1025 09:55:22.762961  460969 cri.go:89] found id: "1099e940dc59e4a7fc6edf4f82c427fc4633cbc73d1759f0ef430fccd002219f"
	I1025 09:55:22.762963  460969 cri.go:89] found id: "b7360eb6624b8284557553c607130a8087e3690512dcc9caea4351f9f876fd02"
	I1025 09:55:22.762966  460969 cri.go:89] found id: "9a7e2aef555d4452a0b73ff6d39e556aaf40affe43c7adcaf8fc119b3910c298"
	I1025 09:55:22.762972  460969 cri.go:89] found id: "ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609"
	I1025 09:55:22.762974  460969 cri.go:89] found id: "14107879e5563ed6b5a7c822a1deb19829cc37e77da237976440a7dadb7144c1"
	I1025 09:55:22.762977  460969 cri.go:89] found id: ""
	I1025 09:55:22.763016  460969 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:55:22.774726  460969 retry.go:31] will retry after 493.460059ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:55:22Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:55:23.268454  460969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:55:23.281638  460969 pause.go:52] kubelet running: false
	I1025 09:55:23.281702  460969 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:55:23.418306  460969 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:55:23.418400  460969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:55:23.484138  460969 cri.go:89] found id: "8be1b97ae8c99fdf0cfe2552030fe0a0e942bad96c59260acda25fc373b2370f"
	I1025 09:55:23.484161  460969 cri.go:89] found id: "9e0ebd1eedf1bfec2ce3bb2e23264d77a78263d4507d8318f07c179eaf43ef90"
	I1025 09:55:23.484168  460969 cri.go:89] found id: "e5dc0927bdb5b9abca88e2f1181fcab28f3d98593d8c76d8d66e67df6c8841e7"
	I1025 09:55:23.484172  460969 cri.go:89] found id: "7d1412ad484fd280b8e475edb38e636d8b265e528fb5edc4c49694d11aa74026"
	I1025 09:55:23.484177  460969 cri.go:89] found id: "3fb115552602e82341d7e2918cd812563ad3b933adfcf256e50f6b6234235080"
	I1025 09:55:23.484182  460969 cri.go:89] found id: "8a40c304121945c99334f375a4fc8f1073390b82cca6a44c6e2b224a5804ed43"
	I1025 09:55:23.484185  460969 cri.go:89] found id: "1099e940dc59e4a7fc6edf4f82c427fc4633cbc73d1759f0ef430fccd002219f"
	I1025 09:55:23.484189  460969 cri.go:89] found id: "b7360eb6624b8284557553c607130a8087e3690512dcc9caea4351f9f876fd02"
	I1025 09:55:23.484193  460969 cri.go:89] found id: "9a7e2aef555d4452a0b73ff6d39e556aaf40affe43c7adcaf8fc119b3910c298"
	I1025 09:55:23.484219  460969 cri.go:89] found id: "ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609"
	I1025 09:55:23.484227  460969 cri.go:89] found id: "14107879e5563ed6b5a7c822a1deb19829cc37e77da237976440a7dadb7144c1"
	I1025 09:55:23.484230  460969 cri.go:89] found id: ""
	I1025 09:55:23.484269  460969 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:55:23.495942  460969 retry.go:31] will retry after 416.132656ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:55:23Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:55:23.912584  460969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:55:23.925497  460969 pause.go:52] kubelet running: false
	I1025 09:55:23.925546  460969 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:55:24.066029  460969 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:55:24.066129  460969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:55:24.133990  460969 cri.go:89] found id: "8be1b97ae8c99fdf0cfe2552030fe0a0e942bad96c59260acda25fc373b2370f"
	I1025 09:55:24.134013  460969 cri.go:89] found id: "9e0ebd1eedf1bfec2ce3bb2e23264d77a78263d4507d8318f07c179eaf43ef90"
	I1025 09:55:24.134019  460969 cri.go:89] found id: "e5dc0927bdb5b9abca88e2f1181fcab28f3d98593d8c76d8d66e67df6c8841e7"
	I1025 09:55:24.134024  460969 cri.go:89] found id: "7d1412ad484fd280b8e475edb38e636d8b265e528fb5edc4c49694d11aa74026"
	I1025 09:55:24.134036  460969 cri.go:89] found id: "3fb115552602e82341d7e2918cd812563ad3b933adfcf256e50f6b6234235080"
	I1025 09:55:24.134042  460969 cri.go:89] found id: "8a40c304121945c99334f375a4fc8f1073390b82cca6a44c6e2b224a5804ed43"
	I1025 09:55:24.134046  460969 cri.go:89] found id: "1099e940dc59e4a7fc6edf4f82c427fc4633cbc73d1759f0ef430fccd002219f"
	I1025 09:55:24.134050  460969 cri.go:89] found id: "b7360eb6624b8284557553c607130a8087e3690512dcc9caea4351f9f876fd02"
	I1025 09:55:24.134053  460969 cri.go:89] found id: "9a7e2aef555d4452a0b73ff6d39e556aaf40affe43c7adcaf8fc119b3910c298"
	I1025 09:55:24.134060  460969 cri.go:89] found id: "ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609"
	I1025 09:55:24.134063  460969 cri.go:89] found id: "14107879e5563ed6b5a7c822a1deb19829cc37e77da237976440a7dadb7144c1"
	I1025 09:55:24.134065  460969 cri.go:89] found id: ""
	I1025 09:55:24.134105  460969 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:55:24.148730  460969 out.go:203] 
	W1025 09:55:24.150008  460969 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:55:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:55:24.150028  460969 out.go:285] * 
	W1025 09:55:24.154116  460969 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:55:24.155331  460969 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-880773 --alsologtostderr -v=1 failed: exit status 80
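A minimal sketch of the run-and-assert pattern this failure comes from (the binary path and profile name are taken from the line above; the test body is illustrative, not the suite's actual helpers):

	package sketch

	import (
		"os/exec"
		"testing"
	)

	// TestPauseExitCode runs the built minikube binary and fails the test on
	// a non-zero exit status, surfacing the code (here it was 80).
	func TestPauseExitCode(t *testing.T) {
		cmd := exec.Command("out/minikube-linux-amd64", "pause", "-p", "default-k8s-diff-port-880773", "--alsologtostderr", "-v=1")
		out, err := cmd.CombinedOutput()
		if err != nil {
			if exitErr, ok := err.(*exec.ExitError); ok {
				t.Fatalf("pause failed: exit status %d\n%s", exitErr.ExitCode(), out)
			}
			t.Fatalf("pause failed: %v", err)
		}
	}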
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-880773
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-880773:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97",
	        "Created": "2025-10-25T09:52:38.521061713Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 450164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:54:19.524408975Z",
	            "FinishedAt": "2025-10-25T09:54:17.915408663Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97/hosts",
	        "LogPath": "/var/lib/docker/containers/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97-json.log",
	        "Name": "/default-k8s-diff-port-880773",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-880773:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-880773",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97",
	                "LowerDir": "/var/lib/docker/overlay2/7406dd3ccf074a8c0d63e89c8d8fb56dbbf724c2e72ef4e5d3645a687d36caae-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7406dd3ccf074a8c0d63e89c8d8fb56dbbf724c2e72ef4e5d3645a687d36caae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7406dd3ccf074a8c0d63e89c8d8fb56dbbf724c2e72ef4e5d3645a687d36caae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7406dd3ccf074a8c0d63e89c8d8fb56dbbf724c2e72ef4e5d3645a687d36caae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-880773",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-880773/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-880773",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-880773",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-880773",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b5314118c188bd018d1fd204973f5ec858cb3018723d9cd564ceb6c9182c96fc",
	            "SandboxKey": "/var/run/docker/netns/b5314118c188",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33250"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33251"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33254"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33252"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33253"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-880773": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:d1:a4:a2:10:0a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6ddf7a97662fac8be0712f15b409763064fa73f60cb64be86aabc92b884c53a0",
	                    "EndpointID": "a3ac336efcccfdcae507230cd1f042d3b5e1e89d3d13ed261a2cfba053ff06c9",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-880773",
	                        "9f0bdf9b54bd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
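The NetworkSettings.Ports map in the inspect output above is what the SSH plumbing queries; a minimal sketch of that lookup, using the same Go template the cli_runner commands later in this report issue (the helper name is assumed):

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// HostPort returns the host port mapped to a container port, e.g.
	// HostPort("default-k8s-diff-port-880773", "22/tcp") -> "33250".
	func HostPort(container, port string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "` + port + `") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		return strings.TrimSpace(string(out)), err
	}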
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773: exit status 2 (327.685928ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-880773 logs -n 25
E1025 09:55:25.270157  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/kindnet-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-880773 logs -n 25: (1.084503268s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-001549                                                                                                                                                                                                               │ disable-driver-mounts-001549 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p no-preload-656799 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-676314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-656799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-880773 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-880773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-846915 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ stop    │ -p embed-certs-846915 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ no-preload-656799 image list --format=json                                                                                                                                                                                                    │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p no-preload-656799 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ delete  │ -p no-preload-656799                                                                                                                                                                                                                          │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ delete  │ -p no-preload-656799                                                                                                                                                                                                                          │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ old-k8s-version-676314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p old-k8s-version-676314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-846915 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ delete  │ -p old-k8s-version-676314                                                                                                                                                                                                                     │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ delete  │ -p old-k8s-version-676314                                                                                                                                                                                                                     │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ default-k8s-diff-port-880773 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ pause   │ -p default-k8s-diff-port-880773 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:54:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:54:50.490480  457008 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:54:50.490778  457008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:50.490791  457008 out.go:374] Setting ErrFile to fd 2...
	I1025 09:54:50.490795  457008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:50.491023  457008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:54:50.491458  457008 out.go:368] Setting JSON to false
	I1025 09:54:50.492784  457008 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5834,"bootTime":1761380256,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:54:50.492874  457008 start.go:141] virtualization: kvm guest
	I1025 09:54:50.494727  457008 out.go:179] * [embed-certs-846915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:54:50.495938  457008 notify.go:220] Checking for updates...
	I1025 09:54:50.495955  457008 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:54:50.497200  457008 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:54:50.498359  457008 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:50.499624  457008 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:54:50.500821  457008 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:54:50.501999  457008 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:54:50.503677  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:50.504213  457008 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:54:50.529014  457008 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:54:50.529154  457008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:50.591445  457008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:54:50.580621433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:50.591560  457008 docker.go:318] overlay module found
	I1025 09:54:50.592851  457008 out.go:179] * Using the docker driver based on existing profile
	I1025 09:54:50.593988  457008 start.go:305] selected driver: docker
	I1025 09:54:50.594007  457008 start.go:925] validating driver "docker" against &{Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:50.594132  457008 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:54:50.594767  457008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:50.658713  457008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:54:50.645802852 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:50.659072  457008 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:50.659108  457008 cni.go:84] Creating CNI manager for ""
	I1025 09:54:50.659179  457008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
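	// Sketch (editorial assumption, not minikube's cni.go): the line above
	// records a decision rule, "docker" driver + "crio" runtime -> kindnet;
	// a minimal stand-in for that recommendation table:
	package cnisketch

	// RecommendCNI mirrors the logged recommendation; other branches are
	// illustrative only.
	func RecommendCNI(driver, runtime string) string {
		if driver == "docker" && runtime == "crio" {
			return "kindnet"
		}
		return "" // fall back to the runtime's default networking
	}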
	I1025 09:54:50.659237  457008 start.go:349] cluster config:
	{Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:50.660979  457008 out.go:179] * Starting "embed-certs-846915" primary control-plane node in "embed-certs-846915" cluster
	I1025 09:54:50.662225  457008 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:54:50.663491  457008 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:54:50.664700  457008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:50.664762  457008 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:54:50.664778  457008 cache.go:58] Caching tarball of preloaded images
	I1025 09:54:50.664819  457008 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:54:50.664906  457008 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:54:50.664923  457008 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:54:50.665060  457008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/config.json ...
	I1025 09:54:50.686709  457008 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:54:50.686734  457008 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:54:50.686758  457008 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:54:50.686788  457008 start.go:360] acquireMachinesLock for embed-certs-846915: {Name:mk6afaad62774c341d106d1a8d37743a274e5cb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:54:50.686902  457008 start.go:364] duration metric: took 69.005µs to acquireMachinesLock for "embed-certs-846915"
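	// Sketch (editorial assumption, not minikube's mutex package): the
	// acquireMachinesLock lines above show a named lock configured with
	// Delay:500ms and Timeout:10m0s; a minimal file-based version of that
	// polling pattern:
	package locksketch

	import (
		"fmt"
		"os"
		"time"
	)

	// AcquireLock polls for an exclusive lock file until the timeout
	// expires; the caller removes path to release the lock.
	func AcquireLock(path string, delay, timeout time.Duration) (*os.File, error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				return f, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
			}
			time.Sleep(delay)
		}
	}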
	I1025 09:54:50.686926  457008 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:54:50.686937  457008 fix.go:54] fixHost starting: 
	I1025 09:54:50.687222  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:50.706726  457008 fix.go:112] recreateIfNeeded on embed-certs-846915: state=Stopped err=<nil>
	W1025 09:54:50.706755  457008 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:54:50.550561  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:53.049954  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:54:50.708166  457008 out.go:252] * Restarting existing docker container for "embed-certs-846915" ...
	I1025 09:54:50.708247  457008 cli_runner.go:164] Run: docker start embed-certs-846915
	I1025 09:54:50.967025  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:50.987855  457008 kic.go:430] container "embed-certs-846915" state is running.
	I1025 09:54:50.988396  457008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-846915
	I1025 09:54:51.010564  457008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/config.json ...
	I1025 09:54:51.010825  457008 machine.go:93] provisionDockerMachine start ...
	I1025 09:54:51.010912  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:51.030680  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:51.031028  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:51.031045  457008 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:54:51.031643  457008 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57398->127.0.0.1:33255: read: connection reset by peer
	I1025 09:54:54.174504  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-846915
	
	I1025 09:54:54.174532  457008 ubuntu.go:182] provisioning hostname "embed-certs-846915"
	I1025 09:54:54.174596  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:54.193572  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:54.193807  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:54.193820  457008 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-846915 && echo "embed-certs-846915" | sudo tee /etc/hostname
	I1025 09:54:54.343404  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-846915
	
	I1025 09:54:54.343512  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:54.361545  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:54.361766  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:54.361784  457008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-846915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-846915/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-846915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:54:54.501002  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:54:54.501029  457008 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:54:54.501072  457008 ubuntu.go:190] setting up certificates
	I1025 09:54:54.501087  457008 provision.go:84] configureAuth start
	I1025 09:54:54.501144  457008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-846915
	I1025 09:54:54.519513  457008 provision.go:143] copyHostCerts
	I1025 09:54:54.519592  457008 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:54:54.519607  457008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:54:54.519682  457008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:54:54.519809  457008 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:54:54.519821  457008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:54:54.519850  457008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:54:54.519924  457008 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:54:54.519931  457008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:54:54.519959  457008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:54:54.520024  457008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.embed-certs-846915 san=[127.0.0.1 192.168.103.2 embed-certs-846915 localhost minikube]
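	// Sketch (editorial assumption): provision.go:117 above generates a
	// server certificate whose SANs are the hosts and IPs in san=[...]; in
	// Go's crypto/x509 those land in the template's DNSNames and IPAddresses
	// fields. Self-signed here for brevity; minikube signs with its CA.
	package certsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// ServerCertDER returns a DER-encoded, self-signed certificate carrying
	// the SANs logged above.
	func ServerCertDER() ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-846915"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
			DNSNames:     []string{"embed-certs-846915", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	}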
	I1025 09:54:54.903702  457008 provision.go:177] copyRemoteCerts
	I1025 09:54:54.903771  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:54:54.903818  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:54.921801  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.047195  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:54:55.066909  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:54:55.085856  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:54:55.103394  457008 provision.go:87] duration metric: took 602.287274ms to configureAuth
	I1025 09:54:55.103426  457008 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:54:55.103621  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:55.103746  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.122301  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:55.122561  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:55.122584  457008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:54:55.479695  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:54:55.479723  457008 machine.go:96] duration metric: took 4.468883425s to provisionDockerMachine
	I1025 09:54:55.479736  457008 start.go:293] postStartSetup for "embed-certs-846915" (driver="docker")
	I1025 09:54:55.479750  457008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:54:55.479835  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:54:55.479894  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.498185  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.601303  457008 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:54:55.605265  457008 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:54:55.605300  457008 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:54:55.605314  457008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:54:55.605388  457008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:54:55.605478  457008 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:54:55.605582  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:54:55.614105  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:55.632538  457008 start.go:296] duration metric: took 152.784026ms for postStartSetup
	I1025 09:54:55.632624  457008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:54:55.632678  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.655070  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.753771  457008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:54:55.758537  457008 fix.go:56] duration metric: took 5.07159091s for fixHost
	I1025 09:54:55.758571  457008 start.go:83] releasing machines lock for "embed-certs-846915", held for 5.07165484s
	I1025 09:54:55.758657  457008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-846915
	I1025 09:54:55.776411  457008 ssh_runner.go:195] Run: cat /version.json
	I1025 09:54:55.776457  457008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:54:55.776489  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.776531  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.796671  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.796898  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.952166  457008 ssh_runner.go:195] Run: systemctl --version
	I1025 09:54:55.959161  457008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:54:55.995157  457008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:54:56.000389  457008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:54:56.000452  457008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:54:56.009221  457008 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:54:56.009247  457008 start.go:495] detecting cgroup driver to use...
	I1025 09:54:56.009282  457008 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:54:56.009336  457008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:54:56.023779  457008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:54:56.037986  457008 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:54:56.038049  457008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:54:56.054727  457008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:54:56.068786  457008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:54:56.162705  457008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:54:56.244217  457008 docker.go:234] disabling docker service ...
	I1025 09:54:56.244284  457008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:54:56.258520  457008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:54:56.271621  457008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:54:56.349740  457008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:54:56.432747  457008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:54:56.444975  457008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:54:56.459162  457008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:54:56.459221  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.468059  457008 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:54:56.468118  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.477045  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.485501  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.493858  457008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:54:56.501638  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.510445  457008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.519270  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.528402  457008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:54:56.536827  457008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:54:56.544264  457008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:56.623484  457008 ssh_runner.go:195] Run: sudo systemctl restart crio
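
	The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the cgroup manager, and the conmon cgroup, before the daemon-reload and restart. A minimal Go sketch of the first few rewrites, assuming the key = value layout the sed patterns expect (an approximation, not minikube's actual code):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		conf := string(data)

		// equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

		// equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)

		// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			log.Fatal(err)
		}
	}
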
	I1025 09:54:56.736429  457008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:54:56.736491  457008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:54:56.740613  457008 start.go:563] Will wait 60s for crictl version
	I1025 09:54:56.740677  457008 ssh_runner.go:195] Run: which crictl
	I1025 09:54:56.744278  457008 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:54:56.768009  457008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
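
	"Will wait 60s for socket path" and "Will wait 60s for crictl version" above are plain polling loops with a deadline. A small self-contained sketch of the socket poll:

	package main

	import (
		"fmt"
		"log"
		"os"
		"time"
	)

	// waitForSocket polls until the path exists or the deadline passes,
	// mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			log.Fatal(err)
		}
		fmt.Println("crio socket is up")
	}
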
	I1025 09:54:56.768081  457008 ssh_runner.go:195] Run: crio --version
	I1025 09:54:56.795678  457008 ssh_runner.go:195] Run: crio --version
	I1025 09:54:56.824108  457008 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:54:56.825165  457008 cli_runner.go:164] Run: docker network inspect embed-certs-846915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:54:56.842297  457008 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:54:56.847046  457008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
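
	The bash one-liner above is an idempotent hosts-file update: drop any stale host.minikube.internal line, append a fresh one, and copy the result into place. The same pattern in Go (a sketch; the direct write shown here needs root, whereas the original stages through /tmp/h.$$ and sudo cp):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.103.1\thost.minikube.internal" // gateway IP from the log
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// drop any stale entry for this hostname, like the grep -v above
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			log.Fatal(err)
		}
	}
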
	I1025 09:54:56.857067  457008 kubeadm.go:883] updating cluster {Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:54:56.857171  457008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:56.857214  457008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:56.888963  457008 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:56.888988  457008 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:54:56.889036  457008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:56.915006  457008 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:56.915029  457008 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:54:56.915037  457008 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:54:56.915134  457008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-846915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:54:56.915198  457008 ssh_runner.go:195] Run: crio config
	I1025 09:54:56.960405  457008 cni.go:84] Creating CNI manager for ""
	I1025 09:54:56.960425  457008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:56.960446  457008 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:54:56.960476  457008 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-846915 NodeName:embed-certs-846915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:54:56.960649  457008 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-846915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
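
	The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sketch for sanity-checking such a stream with gopkg.in/yaml.v3, printing each document's apiVersion and kind; the path is the staging file it is scp'd to a few lines below:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // end of the multi-document stream
				}
				log.Fatal(err)
			}
			fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
		}
	}
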
	
	I1025 09:54:56.960737  457008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:54:56.968913  457008 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:54:56.968987  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:54:56.976772  457008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1025 09:54:56.989175  457008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:54:57.001654  457008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1025 09:54:57.014581  457008 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:54:57.018476  457008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:54:57.028738  457008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:57.108359  457008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:57.134919  457008 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915 for IP: 192.168.103.2
	I1025 09:54:57.134944  457008 certs.go:195] generating shared ca certs ...
	I1025 09:54:57.134965  457008 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.135148  457008 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:54:57.135208  457008 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:54:57.135221  457008 certs.go:257] generating profile certs ...
	I1025 09:54:57.135321  457008 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/client.key
	I1025 09:54:57.135400  457008 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/apiserver.key.b5da4f55
	I1025 09:54:57.135449  457008 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/proxy-client.key
	I1025 09:54:57.135591  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:54:57.135636  457008 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:54:57.135649  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:54:57.135684  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:54:57.135715  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:54:57.135746  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:54:57.135817  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:57.136711  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:54:57.156186  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:54:57.174513  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:54:57.194100  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:54:57.219083  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 09:54:57.237565  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:54:57.254763  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:54:57.272283  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:54:57.289481  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:54:57.306704  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:54:57.323681  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:54:57.341494  457008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:54:57.353846  457008 ssh_runner.go:195] Run: openssl version
	I1025 09:54:57.359964  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:54:57.368508  457008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:57.372486  457008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:57.372540  457008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:57.408024  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:54:57.416387  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:54:57.424628  457008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:54:57.428201  457008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:54:57.428248  457008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:54:57.462175  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:54:57.470726  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:54:57.479469  457008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:54:57.483150  457008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:54:57.483201  457008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:54:57.516984  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
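
	The b5213941.0, 51391683.0, and 3ec20f2e.0 link names above are OpenSSL subject-hash filenames: "openssl x509 -hash -noout" prints the hash under which TLS libraries look a CA up in /etc/ssl/certs. A sketch of the hash-then-symlink step, shelling out to openssl much as the runner does:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs equivalent: remove any old link, then create the new one
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link, "->", cert)
	}
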
	I1025 09:54:57.525156  457008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:54:57.529436  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:54:57.564653  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:54:57.599517  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:54:57.635935  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:54:57.682235  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:54:57.722478  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
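
	Each "-checkend 86400" invocation above exits non-zero if the certificate stops being valid within the next 24 hours. The equivalent check in pure Go with crypto/x509 (a sketch over one of the files checked above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// -checkend 86400: fail if the cert is no longer valid 24h from now
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			log.Fatalf("certificate expires within 24h (NotAfter=%s)", cert.NotAfter)
		}
		fmt.Println("certificate valid for at least another 24h")
	}
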
	I1025 09:54:57.771292  457008 kubeadm.go:400] StartCluster: {Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:57.771403  457008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:54:57.771468  457008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:54:57.809369  457008 cri.go:89] found id: "46c544af25ffafca1d729eb37ffa1959807879d6234f84e37186f47588ac6ec9"
	I1025 09:54:57.809404  457008 cri.go:89] found id: "007e89b7baf40445b09598af39cfba319acdf11728b62f56a4aaf210995d2127"
	I1025 09:54:57.809410  457008 cri.go:89] found id: "48b644dd8de53c8507fceecb6ceae794c15a6e4bfda24197562f2d2226ed7a7a"
	I1025 09:54:57.809414  457008 cri.go:89] found id: "1a49d21a7ef6b31c7d183bb24b6647a09b20b673fd98b5105086202f5e9caed0"
	I1025 09:54:57.809418  457008 cri.go:89] found id: ""
	I1025 09:54:57.809467  457008 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:54:57.823074  457008 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:57Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:57.823150  457008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:54:57.831663  457008 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:54:57.831683  457008 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:54:57.831729  457008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:54:57.839555  457008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:54:57.840254  457008 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-846915" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:57.840583  457008 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-130604/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-846915" cluster setting kubeconfig missing "embed-certs-846915" context setting]
	I1025 09:54:57.841162  457008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.842882  457008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:54:57.850861  457008 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 09:54:57.850898  457008 kubeadm.go:601] duration metric: took 19.208602ms to restartPrimaryControlPlane
	I1025 09:54:57.850908  457008 kubeadm.go:402] duration metric: took 79.623638ms to StartCluster
	I1025 09:54:57.850925  457008 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.850990  457008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:57.852542  457008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.852799  457008 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:54:57.852875  457008 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:54:57.852996  457008 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-846915"
	I1025 09:54:57.853021  457008 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-846915"
	W1025 09:54:57.853035  457008 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:54:57.853054  457008 addons.go:69] Setting dashboard=true in profile "embed-certs-846915"
	I1025 09:54:57.853065  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:57.853079  457008 addons.go:238] Setting addon dashboard=true in "embed-certs-846915"
	I1025 09:54:57.853067  457008 addons.go:69] Setting default-storageclass=true in profile "embed-certs-846915"
	W1025 09:54:57.853093  457008 addons.go:247] addon dashboard should already be in state true
	I1025 09:54:57.853104  457008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-846915"
	I1025 09:54:57.853063  457008 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:54:57.853128  457008 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:54:57.853457  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.853571  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.853627  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.855906  457008 out.go:179] * Verifying Kubernetes components...
	I1025 09:54:57.857196  457008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:57.879929  457008 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:54:57.879948  457008 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:54:57.881026  457008 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:57.881043  457008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:54:57.881074  457008 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 09:54:55.549837  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:57.550264  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:54:57.881097  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:57.881717  457008 addons.go:238] Setting addon default-storageclass=true in "embed-certs-846915"
	W1025 09:54:57.881738  457008 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:54:57.881767  457008 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:54:57.882197  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:54:57.882215  457008 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:54:57.882233  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.882272  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:57.912925  457008 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:57.912955  457008 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:54:57.913022  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:57.914868  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:57.916299  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:57.937956  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:57.998037  457008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:58.013908  457008 node_ready.go:35] waiting up to 6m0s for node "embed-certs-846915" to be "Ready" ...
	I1025 09:54:58.030429  457008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:58.035735  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:54:58.035760  457008 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:54:58.055893  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:54:58.055921  457008 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:54:58.057225  457008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:58.072489  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:54:58.072523  457008 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:54:58.091219  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:54:58.091239  457008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:54:58.108519  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:54:58.108542  457008 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:54:58.122900  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:54:58.122930  457008 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:54:58.135662  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:54:58.135688  457008 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:54:58.148215  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:54:58.148239  457008 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:54:58.160869  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:58.160896  457008 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:54:58.173696  457008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:59.994021  457008 node_ready.go:49] node "embed-certs-846915" is "Ready"
	I1025 09:54:59.994059  457008 node_ready.go:38] duration metric: took 1.980116383s for node "embed-certs-846915" to be "Ready" ...
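
	The node_ready.go wait boils down to polling the Node object's Ready condition until it reports True. A hedged client-go sketch of the same check, assuming a kubeconfig that points at this cluster:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the NodeReady condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // same budget as the log
		for time.Now().Before(deadline) {
			n, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-846915", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for node Ready")
	}
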
	I1025 09:54:59.994078  457008 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:54:59.994133  457008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:55:00.524810  457008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.494340014s)
	I1025 09:55:00.524885  457008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.467548938s)
	I1025 09:55:00.525043  457008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.35130278s)
	I1025 09:55:00.525304  457008 api_server.go:72] duration metric: took 2.672474172s to wait for apiserver process to appear ...
	I1025 09:55:00.525323  457008 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:55:00.525339  457008 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:55:00.527109  457008 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-846915 addons enable metrics-server
	
	I1025 09:55:00.530790  457008 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:55:00.530823  457008 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:55:00.541399  457008 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1025 09:54:59.550820  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:55:02.050441  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:55:00.543335  457008 addons.go:514] duration metric: took 2.690467088s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 09:55:01.025434  457008 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:55:01.029928  457008 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:55:01.029957  457008 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:55:01.525569  457008 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:55:01.530405  457008 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:55:01.531317  457008 api_server.go:141] control plane version: v1.34.1
	I1025 09:55:01.531342  457008 api_server.go:131] duration metric: took 1.00601266s to wait for apiserver health ...
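
	The healthz wait above is an HTTPS poll against :8443/healthz until the 500s (caused by the still-failing rbac/bootstrap-roles post-start hooks) turn into a 200. A sketch of such a poll; it skips TLS verification purely for brevity, whereas a real check should verify against minikubeCA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// sketch only: verify against the cluster CA in real code
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.103.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver never became healthy")
	}
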
	I1025 09:55:01.531364  457008 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:55:01.534517  457008 system_pods.go:59] 8 kube-system pods found
	I1025 09:55:01.534557  457008 system_pods.go:61] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:55:01.534571  457008 system_pods.go:61] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:55:01.534580  457008 system_pods.go:61] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:55:01.534586  457008 system_pods.go:61] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:55:01.534594  457008 system_pods.go:61] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:55:01.534601  457008 system_pods.go:61] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:55:01.534607  457008 system_pods.go:61] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:55:01.534612  457008 system_pods.go:61] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Running
	I1025 09:55:01.534619  457008 system_pods.go:74] duration metric: took 3.248397ms to wait for pod list to return data ...
	I1025 09:55:01.534630  457008 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:55:01.537060  457008 default_sa.go:45] found service account: "default"
	I1025 09:55:01.537080  457008 default_sa.go:55] duration metric: took 2.439904ms for default service account to be created ...
	I1025 09:55:01.537090  457008 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:55:01.539504  457008 system_pods.go:86] 8 kube-system pods found
	I1025 09:55:01.539542  457008 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:55:01.539555  457008 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:55:01.539567  457008 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:55:01.539579  457008 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:55:01.539592  457008 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:55:01.539604  457008 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:55:01.539623  457008 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:55:01.539632  457008 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Running
	I1025 09:55:01.539642  457008 system_pods.go:126] duration metric: took 2.545561ms to wait for k8s-apps to be running ...
	I1025 09:55:01.539655  457008 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:55:01.539709  457008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:55:01.553256  457008 system_svc.go:56] duration metric: took 13.59133ms WaitForService to wait for kubelet
	I1025 09:55:01.553280  457008 kubeadm.go:586] duration metric: took 3.700453295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:55:01.553307  457008 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:55:01.556207  457008 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:55:01.556239  457008 node_conditions.go:123] node cpu capacity is 8
	I1025 09:55:01.556252  457008 node_conditions.go:105] duration metric: took 2.940915ms to run NodePressure ...
	I1025 09:55:01.556266  457008 start.go:241] waiting for startup goroutines ...
	I1025 09:55:01.556272  457008 start.go:246] waiting for cluster config update ...
	I1025 09:55:01.556281  457008 start.go:255] writing updated cluster config ...
	I1025 09:55:01.556546  457008 ssh_runner.go:195] Run: rm -f paused
	I1025 09:55:01.560261  457008 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:55:01.563470  457008 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:55:03.568631  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:04.550637  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:55:07.049223  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:55:08.549788  449952 pod_ready.go:94] pod "coredns-66bc5c9577-29ltg" is "Ready"
	I1025 09:55:08.549821  449952 pod_ready.go:86] duration metric: took 38.005597851s for pod "coredns-66bc5c9577-29ltg" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.552948  449952 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.557263  449952 pod_ready.go:94] pod "etcd-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:08.557290  449952 pod_ready.go:86] duration metric: took 4.316609ms for pod "etcd-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.559329  449952 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.562970  449952 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:08.562995  449952 pod_ready.go:86] duration metric: took 3.629414ms for pod "kube-apiserver-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.564977  449952 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.748757  449952 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:08.748792  449952 pod_ready.go:86] duration metric: took 183.792651ms for pod "kube-controller-manager-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.948726  449952 pod_ready.go:83] waiting for pod "kube-proxy-bg94v" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.347710  449952 pod_ready.go:94] pod "kube-proxy-bg94v" is "Ready"
	I1025 09:55:09.347744  449952 pod_ready.go:86] duration metric: took 398.987622ms for pod "kube-proxy-bg94v" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.548542  449952 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.947051  449952 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:09.947079  449952 pod_ready.go:86] duration metric: took 398.50407ms for pod "kube-scheduler-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.947091  449952 pod_ready.go:40] duration metric: took 39.406100171s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:55:09.990440  449952 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:55:10.024224  449952 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-880773" cluster and "default" namespace by default
	W1025 09:55:05.569905  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:07.571127  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:10.069750  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:12.569719  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:15.068937  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:17.569445  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:20.069705  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
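	
	==> pod readiness polling (illustrative sketch) <==
	The pod_ready.go lines above poll each kube-system pod until it is "Ready" or gone.
	A minimal client-go sketch of that pattern follows; waitPodReadyOrGone and the 2s/4m
	timings are illustrative assumptions, not minikube's actual implementation.
	
	package readiness
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitPodReadyOrGone returns nil once the pod reports Ready or no longer
	// exists, matching the `"Ready" or be gone` condition in the log above.
	func waitPodReadyOrGone(cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return true, nil // pod is gone: also counts as done
				}
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}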
	
	
	==> CRI-O <==
	Oct 25 09:54:40 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:40.495968899Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:54:40 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:40.499326483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:54:40 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:40.49936112Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.536325753Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e173cab8-5a6d-4c3b-be4d-dea38ae2663a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.539846838Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1862bcef-72b1-4286-9f62-826ad93b3aac name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.542923783Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d/dashboard-metrics-scraper" id=19234b2e-2171-48b9-99d5-244fb49846a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.54313385Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.550431127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.550862145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.582139057Z" level=info msg="Created container ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d/dashboard-metrics-scraper" id=19234b2e-2171-48b9-99d5-244fb49846a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.586564498Z" level=info msg="Starting container: ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609" id=e081a8a0-1679-441d-8a5f-1a4c87080c7c name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.590388538Z" level=info msg="Started container" PID=1757 containerID=ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d/dashboard-metrics-scraper id=e081a8a0-1679-441d-8a5f-1a4c87080c7c name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e63ff4d972ed89ff8936c8d5c78b31245c823addc7956f53298ed8029d1219e
	Oct 25 09:54:54 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:54.64679347Z" level=info msg="Removing container: 91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473" id=f84ef147-1345-41ff-982f-ad7b114dfdab name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:54:54 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:54.658592283Z" level=info msg="Removed container 91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d/dashboard-metrics-scraper" id=f84ef147-1345-41ff-982f-ad7b114dfdab name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.664005017Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d8207ed3-c832-4e29-ade3-a0bc70c9ca30 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.664962092Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=67de2a4e-998c-49ea-9a12-ddf1e4e5a443 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.666105183Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=606a5f03-dc74-45d4-b848-83bb7b3310ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.666233532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.671396729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.671597558Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1ac1e7bbb6e8bff98f2b71876db239fcd926b4f9b5c65c09f8e989b2342c0fde/merged/etc/passwd: no such file or directory"
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.67163028Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1ac1e7bbb6e8bff98f2b71876db239fcd926b4f9b5c65c09f8e989b2342c0fde/merged/etc/group: no such file or directory"
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.671888902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.699621372Z" level=info msg="Created container 8be1b97ae8c99fdf0cfe2552030fe0a0e942bad96c59260acda25fc373b2370f: kube-system/storage-provisioner/storage-provisioner" id=606a5f03-dc74-45d4-b848-83bb7b3310ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.700211474Z" level=info msg="Starting container: 8be1b97ae8c99fdf0cfe2552030fe0a0e942bad96c59260acda25fc373b2370f" id=a3811c51-e2b0-401a-894c-4c488744fd38 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.702008703Z" level=info msg="Started container" PID=1775 containerID=8be1b97ae8c99fdf0cfe2552030fe0a0e942bad96c59260acda25fc373b2370f description=kube-system/storage-provisioner/storage-provisioner id=a3811c51-e2b0-401a-894c-4c488744fd38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=05743631a5c9c5c72671044fea68f435aa81d7b10af29cf143c8a43f21200b52
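	
	==> CRI gRPC access (illustrative sketch) <==
	The id=... name=/runtime.v1.RuntimeService/... and ImageService fields above are CRI
	gRPC methods that crio serves on a local unix socket. A hedged Go sketch calling the
	same RuntimeService; /var/run/crio/crio.sock is CRI-O's conventional endpoint (an
	assumption for this cluster), and crictl talks to the same API.
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Unix-socket dial; no TLS is involved on the local CRI endpoint.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// ListContainers is served by the same RuntimeService that logged the
		// CreateContainer/StartContainer events above.
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s %-25s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}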
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	8be1b97ae8c99       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   05743631a5c9c       storage-provisioner                                    kube-system
	ea46cdad81552       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago      Exited              dashboard-metrics-scraper   2                   6e63ff4d972ed       dashboard-metrics-scraper-6ffb444bf9-nv47d             kubernetes-dashboard
	14107879e5563       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   dd0575f9c1896       kubernetes-dashboard-855c9754f9-qzqj5                  kubernetes-dashboard
	ecf6c0ce3401b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   813d3d1b088ac       busybox                                                default
	9e0ebd1eedf1b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   1e661c8a5cf2f       coredns-66bc5c9577-29ltg                               kube-system
	e5dc0927bdb5b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   05743631a5c9c       storage-provisioner                                    kube-system
	7d1412ad484fd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   5f51bb4d28158       kindnet-cnqn8                                          kube-system
	3fb115552602e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   4d4d94853e970       kube-proxy-bg94v                                       kube-system
	8a40c30412194       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   109744b0dbcf6       etcd-default-k8s-diff-port-880773                      kube-system
	1099e940dc59e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   cd393076c409e       kube-apiserver-default-k8s-diff-port-880773            kube-system
	b7360eb6624b8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   08b6a6a853a56       kube-controller-manager-default-k8s-diff-port-880773   kube-system
	9a7e2aef555d4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   0a2f607f7d690       kube-scheduler-default-k8s-diff-port-880773            kube-system
	
	
	==> coredns [9e0ebd1eedf1bfec2ce3bb2e23264d77a78263d4507d8318f07c179eaf43ef90] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51534 - 39285 "HINFO IN 4144776211950941106.7961538770481254160. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.070766889s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
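	
	==> apiserver reachability probe (illustrative sketch) <==
	All three list failures above time out dialing 10.96.0.1:443, the in-cluster Service
	VIP of the apiserver, so the pod's problem is network reachability rather than RBAC.
	A tiny probe of the same path (address taken from the log; run it from inside a pod):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// A plain TCP dial to the Service VIP separates a broken route or
		// iptables/nftables path from TLS and authorization problems.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("TCP path to the apiserver Service VIP is fine")
	}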
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-880773
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-880773
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=default-k8s-diff-port-880773
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_53_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:52:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-880773
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:55:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:55:10 +0000   Sat, 25 Oct 2025 09:52:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:55:10 +0000   Sat, 25 Oct 2025 09:52:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:55:10 +0000   Sat, 25 Oct 2025 09:52:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:55:10 +0000   Sat, 25 Oct 2025 09:53:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-880773
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                0255f5ba-c095-4977-bf24-556780863944
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-29ltg                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m21s
	  kube-system                 etcd-default-k8s-diff-port-880773                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m26s
	  kube-system                 kindnet-cnqn8                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-880773             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-880773    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-bg94v                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-default-k8s-diff-port-880773             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nv47d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qzqj5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
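	
	The 850m/220Mi totals above are plain column sums of the per-pod requests listed
	earlier: CPU is 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver)
	+ 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. 850m/8000m ≈ 10%
	of the 8-CPU capacity; memory requests are 70Mi + 100Mi + 50Mi = 220Mi, and the
	170Mi + 50Mi limits likewise sum to 220Mi.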
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m19s                  kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m31s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m31s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m31s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m26s                  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m26s                  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m26s                  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m22s                  node-controller  Node default-k8s-diff-port-880773 event: Registered Node default-k8s-diff-port-880773 in Controller
	  Normal  NodeReady                100s                   kubelet          Node default-k8s-diff-port-880773 status is now: NodeReady
	  Normal  Starting                 59s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                    node-controller  Node default-k8s-diff-port-880773 event: Registered Node default-k8s-diff-port-880773 in Controller
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	
	
	==> etcd [8a40c304121945c99334f375a4fc8f1073390b82cca6a44c6e2b224a5804ed43] <==
	{"level":"warn","ts":"2025-10-25T09:54:28.396295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.404966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.411783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.419398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.425850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.432232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.438384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.447781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.455720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.461909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.471385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.478333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.484478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.490774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.498159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.504819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.510942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.517342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.523392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.529548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.536185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.553018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.559187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.565752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.609226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44288","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:55:25 up  1:37,  0 user,  load average: 3.74, 4.31, 2.81
	Linux default-k8s-diff-port-880773 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7d1412ad484fd280b8e475edb38e636d8b265e528fb5edc4c49694d11aa74026] <==
	I1025 09:54:30.178851       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:54:30.179168       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 09:54:30.179377       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:54:30.179394       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:54:30.179420       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:54:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:54:30.478871       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:54:30.479465       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:54:30.479481       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:54:30.479691       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:54:30.978642       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:54:30.978686       1 metrics.go:72] Registering metrics
	I1025 09:54:30.978784       1 controller.go:711] "Syncing nftables rules"
	I1025 09:54:40.478954       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:54:40.479008       1 main.go:301] handling current node
	I1025 09:54:50.483464       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:54:50.483504       1 main.go:301] handling current node
	I1025 09:55:00.478496       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:55:00.478524       1 main.go:301] handling current node
	I1025 09:55:10.479448       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:55:10.479499       1 main.go:301] handling current node
	I1025 09:55:20.481150       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:55:20.481184       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1099e940dc59e4a7fc6edf4f82c427fc4633cbc73d1759f0ef430fccd002219f] <==
	I1025 09:54:29.078487       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:54:29.078498       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 09:54:29.078600       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:54:29.078608       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:54:29.079185       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:54:29.079264       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:54:29.079280       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:54:29.079288       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:54:29.079294       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:54:29.080363       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:54:29.080452       1 policy_source.go:240] refreshing policies
	I1025 09:54:29.084224       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:54:29.091647       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:54:29.117318       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:54:29.324152       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:54:29.358183       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:54:29.377436       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:54:29.384173       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:54:29.391587       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:54:29.426528       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.217.97"}
	I1025 09:54:29.437908       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.156.231"}
	I1025 09:54:29.982853       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:54:32.420411       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:54:32.862493       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:54:33.019384       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b7360eb6624b8284557553c607130a8087e3690512dcc9caea4351f9f876fd02] <==
	I1025 09:54:32.409696       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:54:32.409829       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:54:32.409857       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:54:32.410315       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:54:32.410398       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:54:32.410450       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:54:32.410483       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:54:32.410493       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:54:32.410811       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:54:32.410936       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-880773"
	I1025 09:54:32.410985       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:54:32.411077       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:54:32.411403       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:54:32.411773       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:54:32.413124       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:54:32.413142       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:54:32.413151       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:54:32.413974       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:54:32.414172       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:54:32.417043       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:54:32.418999       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:54:32.419991       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:54:32.422430       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:54:32.423624       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:54:32.427392       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3fb115552602e82341d7e2918cd812563ad3b933adfcf256e50f6b6234235080] <==
	I1025 09:54:29.938497       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:54:30.010914       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:54:30.111569       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:54:30.111621       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1025 09:54:30.111755       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:54:30.135072       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:54:30.135140       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:54:30.141657       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:54:30.142104       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:54:30.142142       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:54:30.143710       1 config.go:200] "Starting service config controller"
	I1025 09:54:30.143728       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:54:30.143885       1 config.go:309] "Starting node config controller"
	I1025 09:54:30.143975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:54:30.144159       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:54:30.144182       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:54:30.144219       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:54:30.144274       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:54:30.243913       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:54:30.245089       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:54:30.245088       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:54:30.245121       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9a7e2aef555d4452a0b73ff6d39e556aaf40affe43c7adcaf8fc119b3910c298] <==
	I1025 09:54:28.059483       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:54:29.002478       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:54:29.002518       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:54:29.002549       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:54:29.002559       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:54:29.036116       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:54:29.036148       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:54:29.038194       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:54:29.038482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:54:29.038689       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:54:29.038801       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:54:29.139887       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:54:33 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:33.042223     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0212e92d-87ca-454e-be84-4baeec2893ff-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-nv47d\" (UID: \"0212e92d-87ca-454e-be84-4baeec2893ff\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d"
	Oct 25 09:54:35 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:35.594616     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podStartSLOduration=1.394952657 podStartE2EDuration="3.594593206s" podCreationTimestamp="2025-10-25 09:54:32 +0000 UTC" firstStartedPulling="2025-10-25 09:54:33.308511987 +0000 UTC m=+6.861223472" lastFinishedPulling="2025-10-25 09:54:35.508152527 +0000 UTC m=+9.060864021" observedRunningTime="2025-10-25 09:54:35.594468653 +0000 UTC m=+9.147180152" watchObservedRunningTime="2025-10-25 09:54:35.594593206 +0000 UTC m=+9.147304705"
	Oct 25 09:54:36 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:36.587976     723 scope.go:117] "RemoveContainer" containerID="077b9fd8a06d1f7c2cf431734be80bbaeaa924d81ae6289e965eb0db669af618"
	Oct 25 09:54:37 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:37.593255     723 scope.go:117] "RemoveContainer" containerID="077b9fd8a06d1f7c2cf431734be80bbaeaa924d81ae6289e965eb0db669af618"
	Oct 25 09:54:37 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:37.593638     723 scope.go:117] "RemoveContainer" containerID="91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473"
	Oct 25 09:54:37 default-k8s-diff-port-880773 kubelet[723]: E1025 09:54:37.593961     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nv47d_kubernetes-dashboard(0212e92d-87ca-454e-be84-4baeec2893ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podUID="0212e92d-87ca-454e-be84-4baeec2893ff"
	Oct 25 09:54:38 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:38.417926     723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:54:38 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:38.599321     723 scope.go:117] "RemoveContainer" containerID="91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473"
	Oct 25 09:54:38 default-k8s-diff-port-880773 kubelet[723]: E1025 09:54:38.599546     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nv47d_kubernetes-dashboard(0212e92d-87ca-454e-be84-4baeec2893ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podUID="0212e92d-87ca-454e-be84-4baeec2893ff"
	Oct 25 09:54:39 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:39.604557     723 scope.go:117] "RemoveContainer" containerID="91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473"
	Oct 25 09:54:39 default-k8s-diff-port-880773 kubelet[723]: E1025 09:54:39.604734     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nv47d_kubernetes-dashboard(0212e92d-87ca-454e-be84-4baeec2893ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podUID="0212e92d-87ca-454e-be84-4baeec2893ff"
	Oct 25 09:54:40 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:40.475925     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qzqj5" podStartSLOduration=2.366601028 podStartE2EDuration="8.475900344s" podCreationTimestamp="2025-10-25 09:54:32 +0000 UTC" firstStartedPulling="2025-10-25 09:54:33.313062202 +0000 UTC m=+6.865773683" lastFinishedPulling="2025-10-25 09:54:39.422361513 +0000 UTC m=+12.975072999" observedRunningTime="2025-10-25 09:54:39.619195385 +0000 UTC m=+13.171906885" watchObservedRunningTime="2025-10-25 09:54:40.475900344 +0000 UTC m=+14.028611839"
	Oct 25 09:54:53 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:53.535743     723 scope.go:117] "RemoveContainer" containerID="91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473"
	Oct 25 09:54:54 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:54.645361     723 scope.go:117] "RemoveContainer" containerID="91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473"
	Oct 25 09:54:54 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:54.645581     723 scope.go:117] "RemoveContainer" containerID="ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609"
	Oct 25 09:54:54 default-k8s-diff-port-880773 kubelet[723]: E1025 09:54:54.645778     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nv47d_kubernetes-dashboard(0212e92d-87ca-454e-be84-4baeec2893ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podUID="0212e92d-87ca-454e-be84-4baeec2893ff"
	Oct 25 09:54:58 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:58.582833     723 scope.go:117] "RemoveContainer" containerID="ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609"
	Oct 25 09:54:58 default-k8s-diff-port-880773 kubelet[723]: E1025 09:54:58.583128     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nv47d_kubernetes-dashboard(0212e92d-87ca-454e-be84-4baeec2893ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podUID="0212e92d-87ca-454e-be84-4baeec2893ff"
	Oct 25 09:55:00 default-k8s-diff-port-880773 kubelet[723]: I1025 09:55:00.663586     723 scope.go:117] "RemoveContainer" containerID="e5dc0927bdb5b9abca88e2f1181fcab28f3d98593d8c76d8d66e67df6c8841e7"
	Oct 25 09:55:13 default-k8s-diff-port-880773 kubelet[723]: I1025 09:55:13.535686     723 scope.go:117] "RemoveContainer" containerID="ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609"
	Oct 25 09:55:13 default-k8s-diff-port-880773 kubelet[723]: E1025 09:55:13.535961     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nv47d_kubernetes-dashboard(0212e92d-87ca-454e-be84-4baeec2893ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podUID="0212e92d-87ca-454e-be84-4baeec2893ff"
	Oct 25 09:55:22 default-k8s-diff-port-880773 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:55:22 default-k8s-diff-port-880773 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:55:22 default-k8s-diff-port-880773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:55:22 default-k8s-diff-port-880773 systemd[1]: kubelet.service: Consumed 1.807s CPU time.
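	
	==> CrashLoopBackOff schedule (illustrative sketch) <==
	The back-off in the errors above grows from 10s to 20s because kubelet doubles the
	CrashLoopBackOff delay on every failed restart, starting at 10s and capping at 5
	minutes by default. A quick sketch of that schedule (defaults assumed unchanged):
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// crashLoopDelay reproduces kubelet's default back-off: a 10s base, doubled
	// per restart, capped at 5 minutes.
	func crashLoopDelay(restarts int) time.Duration {
		d := 10 * time.Second
		for i := 0; i < restarts; i++ {
			d *= 2
			if d >= 5*time.Minute {
				return 5 * time.Minute
			}
		}
		return d
	}
	
	func main() {
		// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m, 5m for restarts 0..6.
		for r := 0; r <= 6; r++ {
			fmt.Printf("restart %d: back-off %v\n", r, crashLoopDelay(r))
		}
	}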
	
	
	==> kubernetes-dashboard [14107879e5563ed6b5a7c822a1deb19829cc37e77da237976440a7dadb7144c1] <==
	2025/10/25 09:54:39 Starting overwatch
	2025/10/25 09:54:39 Using namespace: kubernetes-dashboard
	2025/10/25 09:54:39 Using in-cluster config to connect to apiserver
	2025/10/25 09:54:39 Using secret token for csrf signing
	2025/10/25 09:54:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:54:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:54:39 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:54:39 Generating JWE encryption key
	2025/10/25 09:54:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:54:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:54:39 Initializing JWE encryption key from synchronized object
	2025/10/25 09:54:39 Creating in-cluster Sidecar client
	2025/10/25 09:54:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:54:39 Serving insecurely on HTTP port: 9090
	2025/10/25 09:55:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8be1b97ae8c99fdf0cfe2552030fe0a0e942bad96c59260acda25fc373b2370f] <==
	I1025 09:55:00.713657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:55:00.721067       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:55:00.721112       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:55:00.723394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:04.178861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:08.441046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:12.038863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:15.092572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:18.114879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:18.119220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:55:18.119404       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:55:18.119553       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6a6b94ce-f3db-45ae-b74d-db800648c1d4", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-880773_bdc812ea-4041-4a08-b507-90e42a7f24d7 became leader
	I1025 09:55:18.119600       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-880773_bdc812ea-4041-4a08-b507-90e42a7f24d7!
	W1025 09:55:18.121288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:18.126123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:55:18.219935       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-880773_bdc812ea-4041-4a08-b507-90e42a7f24d7!
	W1025 09:55:20.129091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:20.133398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:22.136988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:22.140876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:24.143747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:24.147906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
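	
	==> EndpointSlice migration (illustrative sketch) <==
	The repeated warnings above come from the provisioner's leader-election lock, which
	still reads and writes v1 Endpoints; the API itself says to use discovery.k8s.io/v1
	EndpointSlice instead. Reading slices is a small client-go change (sketch; kube-dns
	is just an example target, and kubernetes.io/service-name is the standard slice label):
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // assumes this runs inside a pod
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		// EndpointSlices that back a Service carry the service-name label.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"})
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, len(s.Endpoints), "endpoints")
		}
	}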
	
	
	==> storage-provisioner [e5dc0927bdb5b9abca88e2f1181fcab28f3d98593d8c76d8d66e67df6c8841e7] <==
	I1025 09:54:29.918057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:54:59.923822       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773
E1025 09:55:25.717972  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/calico-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773: exit status 2 (329.063122ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
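`status --format` takes a Go template rendered against minikube's status struct, which is why the harness can query single fields such as `{{.APIServer}}` above and `{{.Host}}` further down. When triaging a pause failure by hand it can help to print several components in one call; a sketch, where field names beyond the two the test uses are assumptions based on minikube's default `status` output:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-880773 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'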
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-880773 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-880773
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-880773:

-- stdout --
	[
	    {
	        "Id": "9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97",
	        "Created": "2025-10-25T09:52:38.521061713Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 450164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:54:19.524408975Z",
	            "FinishedAt": "2025-10-25T09:54:17.915408663Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97/hosts",
	        "LogPath": "/var/lib/docker/containers/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97/9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97-json.log",
	        "Name": "/default-k8s-diff-port-880773",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-880773:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-880773",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f0bdf9b54bd04758525ac8cb58b50f945c7580a4d2acc85415da84d2f5dca97",
	                "LowerDir": "/var/lib/docker/overlay2/7406dd3ccf074a8c0d63e89c8d8fb56dbbf724c2e72ef4e5d3645a687d36caae-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7406dd3ccf074a8c0d63e89c8d8fb56dbbf724c2e72ef4e5d3645a687d36caae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7406dd3ccf074a8c0d63e89c8d8fb56dbbf724c2e72ef4e5d3645a687d36caae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7406dd3ccf074a8c0d63e89c8d8fb56dbbf724c2e72ef4e5d3645a687d36caae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-880773",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-880773/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-880773",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-880773",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-880773",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b5314118c188bd018d1fd204973f5ec858cb3018723d9cd564ceb6c9182c96fc",
	            "SandboxKey": "/var/run/docker/netns/b5314118c188",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33250"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33251"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33254"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33252"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33253"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-880773": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:d1:a4:a2:10:0a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6ddf7a97662fac8be0712f15b409763064fa73f60cb64be86aabc92b884c53a0",
	                    "EndpointID": "a3ac336efcccfdcae507230cd1f042d3b5e1e89d3d13ed261a2cfba053ff06c9",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-880773",
	                        "9f0bdf9b54bd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
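Rather than scanning the full `docker inspect` dump for pause state, a Go template can pull out just the fields that matter for this test; a minimal sketch using only fields present in the JSON above:

	docker inspect default-k8s-diff-port-880773 \
	  -f 'status={{.State.Status}} paused={{.State.Paused}} ip={{(index .NetworkSettings.Networks "default-k8s-diff-port-880773").IPAddress}}'

For this container that prints status=running paused=false ip=192.168.94.2, consistent with the `Running` status results on either side of the dump.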
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773: exit status 2 (329.074251ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-880773 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-880773 logs -n 25: (1.086825545s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-001549                                                                                                                                                                                                               │ disable-driver-mounts-001549 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p no-preload-656799 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-676314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-656799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-880773 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-880773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-846915 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ stop    │ -p embed-certs-846915 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ no-preload-656799 image list --format=json                                                                                                                                                                                                    │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p no-preload-656799 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ delete  │ -p no-preload-656799                                                                                                                                                                                                                          │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ delete  │ -p no-preload-656799                                                                                                                                                                                                                          │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ old-k8s-version-676314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p old-k8s-version-676314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-846915 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ delete  │ -p old-k8s-version-676314                                                                                                                                                                                                                     │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ delete  │ -p old-k8s-version-676314                                                                                                                                                                                                                     │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ default-k8s-diff-port-880773 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ pause   │ -p default-k8s-diff-port-880773 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:54:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:54:50.490480  457008 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:54:50.490778  457008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:50.490791  457008 out.go:374] Setting ErrFile to fd 2...
	I1025 09:54:50.490795  457008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:50.491023  457008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:54:50.491458  457008 out.go:368] Setting JSON to false
	I1025 09:54:50.492784  457008 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5834,"bootTime":1761380256,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:54:50.492874  457008 start.go:141] virtualization: kvm guest
	I1025 09:54:50.494727  457008 out.go:179] * [embed-certs-846915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:54:50.495938  457008 notify.go:220] Checking for updates...
	I1025 09:54:50.495955  457008 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:54:50.497200  457008 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:54:50.498359  457008 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:50.499624  457008 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:54:50.500821  457008 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:54:50.501999  457008 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:54:50.503677  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:50.504213  457008 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:54:50.529014  457008 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:54:50.529154  457008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:50.591445  457008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:54:50.580621433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:50.591560  457008 docker.go:318] overlay module found
	I1025 09:54:50.592851  457008 out.go:179] * Using the docker driver based on existing profile
	I1025 09:54:50.593988  457008 start.go:305] selected driver: docker
	I1025 09:54:50.594007  457008 start.go:925] validating driver "docker" against &{Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:50.594132  457008 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:54:50.594767  457008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:50.658713  457008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:54:50.645802852 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:50.659072  457008 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:50.659108  457008 cni.go:84] Creating CNI manager for ""
	I1025 09:54:50.659179  457008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:50.659237  457008 start.go:349] cluster config:
	{Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:50.660979  457008 out.go:179] * Starting "embed-certs-846915" primary control-plane node in "embed-certs-846915" cluster
	I1025 09:54:50.662225  457008 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:54:50.663491  457008 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:54:50.664700  457008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:50.664762  457008 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:54:50.664778  457008 cache.go:58] Caching tarball of preloaded images
	I1025 09:54:50.664819  457008 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:54:50.664906  457008 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:54:50.664923  457008 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:54:50.665060  457008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/config.json ...
	I1025 09:54:50.686709  457008 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:54:50.686734  457008 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:54:50.686758  457008 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:54:50.686788  457008 start.go:360] acquireMachinesLock for embed-certs-846915: {Name:mk6afaad62774c341d106d1a8d37743a274e5cb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:54:50.686902  457008 start.go:364] duration metric: took 69.005µs to acquireMachinesLock for "embed-certs-846915"
	I1025 09:54:50.686926  457008 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:54:50.686937  457008 fix.go:54] fixHost starting: 
	I1025 09:54:50.687222  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:50.706726  457008 fix.go:112] recreateIfNeeded on embed-certs-846915: state=Stopped err=<nil>
	W1025 09:54:50.706755  457008 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:54:50.550561  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:53.049954  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:54:50.708166  457008 out.go:252] * Restarting existing docker container for "embed-certs-846915" ...
	I1025 09:54:50.708247  457008 cli_runner.go:164] Run: docker start embed-certs-846915
	I1025 09:54:50.967025  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:50.987855  457008 kic.go:430] container "embed-certs-846915" state is running.
	I1025 09:54:50.988396  457008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-846915
	I1025 09:54:51.010564  457008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/config.json ...
	I1025 09:54:51.010825  457008 machine.go:93] provisionDockerMachine start ...
	I1025 09:54:51.010912  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:51.030680  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:51.031028  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:51.031045  457008 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:54:51.031643  457008 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57398->127.0.0.1:33255: read: connection reset by peer
	I1025 09:54:54.174504  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-846915
	
	I1025 09:54:54.174532  457008 ubuntu.go:182] provisioning hostname "embed-certs-846915"
	I1025 09:54:54.174596  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:54.193572  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:54.193807  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:54.193820  457008 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-846915 && echo "embed-certs-846915" | sudo tee /etc/hostname
	I1025 09:54:54.343404  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-846915
	
	I1025 09:54:54.343512  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:54.361545  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:54.361766  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:54.361784  457008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-846915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-846915/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-846915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:54:54.501002  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:54:54.501029  457008 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:54:54.501072  457008 ubuntu.go:190] setting up certificates
	I1025 09:54:54.501087  457008 provision.go:84] configureAuth start
	I1025 09:54:54.501144  457008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-846915
	I1025 09:54:54.519513  457008 provision.go:143] copyHostCerts
	I1025 09:54:54.519592  457008 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:54:54.519607  457008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:54:54.519682  457008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:54:54.519809  457008 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:54:54.519821  457008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:54:54.519850  457008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:54:54.519924  457008 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:54:54.519931  457008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:54:54.519959  457008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:54:54.520024  457008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.embed-certs-846915 san=[127.0.0.1 192.168.103.2 embed-certs-846915 localhost minikube]
	I1025 09:54:54.903702  457008 provision.go:177] copyRemoteCerts
	I1025 09:54:54.903771  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:54:54.903818  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:54.921801  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.047195  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:54:55.066909  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:54:55.085856  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:54:55.103394  457008 provision.go:87] duration metric: took 602.287274ms to configureAuth
	I1025 09:54:55.103426  457008 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:54:55.103621  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:55.103746  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.122301  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:55.122561  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:55.122584  457008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:54:55.479695  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:54:55.479723  457008 machine.go:96] duration metric: took 4.468883425s to provisionDockerMachine
	I1025 09:54:55.479736  457008 start.go:293] postStartSetup for "embed-certs-846915" (driver="docker")
	I1025 09:54:55.479750  457008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:54:55.479835  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:54:55.479894  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.498185  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.601303  457008 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:54:55.605265  457008 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:54:55.605300  457008 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:54:55.605314  457008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:54:55.605388  457008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:54:55.605478  457008 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:54:55.605582  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:54:55.614105  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:55.632538  457008 start.go:296] duration metric: took 152.784026ms for postStartSetup
	I1025 09:54:55.632624  457008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:54:55.632678  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.655070  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.753771  457008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:54:55.758537  457008 fix.go:56] duration metric: took 5.07159091s for fixHost
	I1025 09:54:55.758571  457008 start.go:83] releasing machines lock for "embed-certs-846915", held for 5.07165484s
	I1025 09:54:55.758657  457008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-846915
	I1025 09:54:55.776411  457008 ssh_runner.go:195] Run: cat /version.json
	I1025 09:54:55.776457  457008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:54:55.776489  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.776531  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.796671  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.796898  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.952166  457008 ssh_runner.go:195] Run: systemctl --version
	I1025 09:54:55.959161  457008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:54:55.995157  457008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:54:56.000389  457008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:54:56.000452  457008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:54:56.009221  457008 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:54:56.009247  457008 start.go:495] detecting cgroup driver to use...
	I1025 09:54:56.009282  457008 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:54:56.009336  457008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:54:56.023779  457008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:54:56.037986  457008 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:54:56.038049  457008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:54:56.054727  457008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:54:56.068786  457008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:54:56.162705  457008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:54:56.244217  457008 docker.go:234] disabling docker service ...
	I1025 09:54:56.244284  457008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:54:56.258520  457008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:54:56.271621  457008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:54:56.349740  457008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:54:56.432747  457008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:54:56.444975  457008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:54:56.459162  457008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:54:56.459221  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.468059  457008 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:54:56.468118  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.477045  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.485501  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.493858  457008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:54:56.501638  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.510445  457008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.519270  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.528402  457008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:54:56.536827  457008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:54:56.544264  457008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:56.623484  457008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:54:56.736429  457008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:54:56.736491  457008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:54:56.740613  457008 start.go:563] Will wait 60s for crictl version
	I1025 09:54:56.740677  457008 ssh_runner.go:195] Run: which crictl
	I1025 09:54:56.744278  457008 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:54:56.768009  457008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
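	The version probe above can be reproduced directly; a small sketch, assuming the crictl.yaml written earlier (or an explicit endpoint flag) points at the CRI-O socket:
	# Ask the runtime for its version over the CRI socket, then peek at status.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl info | head -n 20   # runtime conditions, printed as JSON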
	I1025 09:54:56.768081  457008 ssh_runner.go:195] Run: crio --version
	I1025 09:54:56.795678  457008 ssh_runner.go:195] Run: crio --version
	I1025 09:54:56.824108  457008 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:54:56.825165  457008 cli_runner.go:164] Run: docker network inspect embed-certs-846915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:54:56.842297  457008 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:54:56.847046  457008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
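	The /etc/hosts rewrite above is an idempotent replace-then-append: drop any existing line for the name, append the fresh mapping, then copy the temp file into place. A sketch with the values from the log:
	NAME=host.minikube.internal; IP=192.168.103.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$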
	I1025 09:54:56.857067  457008 kubeadm.go:883] updating cluster {Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:54:56.857171  457008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:56.857214  457008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:56.888963  457008 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:56.888988  457008 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:54:56.889036  457008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:56.915006  457008 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:56.915029  457008 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:54:56.915037  457008 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:54:56.915134  457008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-846915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
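	Once the unit and its 10-kubeadm.conf drop-in are written (see the scp lines below), systemd can display the merged result; a quick sketch for confirming the ExecStart override above landed:
	systemctl cat kubelet                            # unit file plus drop-ins
	systemctl show kubelet -p ExecStart --no-pager   # the effective command line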
	I1025 09:54:56.915198  457008 ssh_runner.go:195] Run: crio config
	I1025 09:54:56.960405  457008 cni.go:84] Creating CNI manager for ""
	I1025 09:54:56.960425  457008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:56.960446  457008 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:54:56.960476  457008 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-846915 NodeName:embed-certs-846915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:54:56.960649  457008 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-846915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:54:56.960737  457008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:54:56.968913  457008 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:54:56.968987  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:54:56.976772  457008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1025 09:54:56.989175  457008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:54:57.001654  457008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
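	A config of this shape can be sanity-checked before kubeadm touches the node; a sketch using the path from the scp above (--dry-run renders the plan without applying it, and newer kubeadm also offers a dedicated validate subcommand):
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new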
	I1025 09:54:57.014581  457008 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:54:57.018476  457008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:54:57.028738  457008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:57.108359  457008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:57.134919  457008 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915 for IP: 192.168.103.2
	I1025 09:54:57.134944  457008 certs.go:195] generating shared ca certs ...
	I1025 09:54:57.134965  457008 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.135148  457008 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:54:57.135208  457008 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:54:57.135221  457008 certs.go:257] generating profile certs ...
	I1025 09:54:57.135321  457008 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/client.key
	I1025 09:54:57.135400  457008 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/apiserver.key.b5da4f55
	I1025 09:54:57.135449  457008 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/proxy-client.key
	I1025 09:54:57.135591  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:54:57.135636  457008 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:54:57.135649  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:54:57.135684  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:54:57.135715  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:54:57.135746  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:54:57.135817  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:57.136711  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:54:57.156186  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:54:57.174513  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:54:57.194100  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:54:57.219083  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 09:54:57.237565  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:54:57.254763  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:54:57.272283  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:54:57.289481  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:54:57.306704  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:54:57.323681  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:54:57.341494  457008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:54:57.353846  457008 ssh_runner.go:195] Run: openssl version
	I1025 09:54:57.359964  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:54:57.368508  457008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:57.372486  457008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:57.372540  457008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:57.408024  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:54:57.416387  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:54:57.424628  457008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:54:57.428201  457008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:54:57.428248  457008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:54:57.462175  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:54:57.470726  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:54:57.479469  457008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:54:57.483150  457008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:54:57.483201  457008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:54:57.516984  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
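	The hash-and-link sequence above is roughly what update-ca-certificates does under the hood: OpenSSL locates CAs in /etc/ssl/certs via <subject-hash>.0 symlinks. By hand, for the first cert:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"   # b5213941.0 in the log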
	I1025 09:54:57.525156  457008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:54:57.529436  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:54:57.564653  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:54:57.599517  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:54:57.635935  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:54:57.682235  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:54:57.722478  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
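	Each `-checkend 86400` probe asks whether the certificate stays valid for another 24 hours; openssl exits 0 if so and 1 if it expires within the window. A sketch against one of the certs shipped above:
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "apiserver.crt valid for at least 24h"
	else
	  echo "apiserver.crt expires within 24h (or already expired)"
	fi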
	I1025 09:54:57.771292  457008 kubeadm.go:400] StartCluster: {Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:57.771403  457008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:54:57.771468  457008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:54:57.809369  457008 cri.go:89] found id: "46c544af25ffafca1d729eb37ffa1959807879d6234f84e37186f47588ac6ec9"
	I1025 09:54:57.809404  457008 cri.go:89] found id: "007e89b7baf40445b09598af39cfba319acdf11728b62f56a4aaf210995d2127"
	I1025 09:54:57.809410  457008 cri.go:89] found id: "48b644dd8de53c8507fceecb6ceae794c15a6e4bfda24197562f2d2226ed7a7a"
	I1025 09:54:57.809414  457008 cri.go:89] found id: "1a49d21a7ef6b31c7d183bb24b6647a09b20b673fd98b5105086202f5e9caed0"
	I1025 09:54:57.809418  457008 cri.go:89] found id: ""
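	The ID list above comes from filtering on the pod-namespace label; the same query stands alone (all states via -a, IDs only via --quiet):
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system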
	I1025 09:54:57.809467  457008 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:54:57.823074  457008 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:57Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:57.823150  457008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:54:57.831663  457008 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:54:57.831683  457008 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:54:57.831729  457008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:54:57.839555  457008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:54:57.840254  457008 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-846915" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:57.840583  457008 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-130604/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-846915" cluster setting kubeconfig missing "embed-certs-846915" context setting]
	I1025 09:54:57.841162  457008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.842882  457008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:54:57.850861  457008 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 09:54:57.850898  457008 kubeadm.go:601] duration metric: took 19.208602ms to restartPrimaryControlPlane
	I1025 09:54:57.850908  457008 kubeadm.go:402] duration metric: took 79.623638ms to StartCluster
	I1025 09:54:57.850925  457008 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.850990  457008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:57.852542  457008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.852799  457008 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:54:57.852875  457008 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:54:57.852996  457008 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-846915"
	I1025 09:54:57.853021  457008 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-846915"
	W1025 09:54:57.853035  457008 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:54:57.853054  457008 addons.go:69] Setting dashboard=true in profile "embed-certs-846915"
	I1025 09:54:57.853065  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:57.853079  457008 addons.go:238] Setting addon dashboard=true in "embed-certs-846915"
	I1025 09:54:57.853067  457008 addons.go:69] Setting default-storageclass=true in profile "embed-certs-846915"
	W1025 09:54:57.853093  457008 addons.go:247] addon dashboard should already be in state true
	I1025 09:54:57.853104  457008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-846915"
	I1025 09:54:57.853063  457008 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:54:57.853128  457008 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:54:57.853457  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.853571  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.853627  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.855906  457008 out.go:179] * Verifying Kubernetes components...
	I1025 09:54:57.857196  457008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:57.879929  457008 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:54:57.879948  457008 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:54:57.881026  457008 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:57.881043  457008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:54:57.881074  457008 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 09:54:55.549837  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:57.550264  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:54:57.881097  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:57.881717  457008 addons.go:238] Setting addon default-storageclass=true in "embed-certs-846915"
	W1025 09:54:57.881738  457008 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:54:57.881767  457008 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:54:57.882197  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:54:57.882215  457008 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:54:57.882233  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.882272  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:57.912925  457008 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:57.912955  457008 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:54:57.913022  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:57.914868  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:57.916299  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:57.937956  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:57.998037  457008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:58.013908  457008 node_ready.go:35] waiting up to 6m0s for node "embed-certs-846915" to be "Ready" ...
	I1025 09:54:58.030429  457008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:58.035735  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:54:58.035760  457008 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:54:58.055893  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:54:58.055921  457008 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:54:58.057225  457008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:58.072489  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:54:58.072523  457008 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:54:58.091219  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:54:58.091239  457008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:54:58.108519  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:54:58.108542  457008 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:54:58.122900  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:54:58.122930  457008 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:54:58.135662  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:54:58.135688  457008 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:54:58.148215  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:54:58.148239  457008 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:54:58.160869  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:58.160896  457008 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:54:58.173696  457008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
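	kubectl accepts repeated -f flags and applies the files in the order given, which is why all ten dashboard manifests go out in one invocation above; trimmed to two files for brevity:
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  -f /etc/kubernetes/addons/dashboard-ns.yaml \
	  -f /etc/kubernetes/addons/dashboard-svc.yaml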
	I1025 09:54:59.994021  457008 node_ready.go:49] node "embed-certs-846915" is "Ready"
	I1025 09:54:59.994059  457008 node_ready.go:38] duration metric: took 1.980116383s for node "embed-certs-846915" to be "Ready" ...
	I1025 09:54:59.994078  457008 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:54:59.994133  457008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:55:00.524810  457008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.494340014s)
	I1025 09:55:00.524885  457008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.467548938s)
	I1025 09:55:00.525043  457008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.35130278s)
	I1025 09:55:00.525304  457008 api_server.go:72] duration metric: took 2.672474172s to wait for apiserver process to appear ...
	I1025 09:55:00.525323  457008 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:55:00.525339  457008 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:55:00.527109  457008 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-846915 addons enable metrics-server
	
	I1025 09:55:00.530790  457008 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:55:00.530823  457008 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
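	The per-check [+]/[-] breakdown is the apiserver's verbose healthz output; it can be reproduced with curl, and individual checks are exposed as subpaths. A sketch, assuming anonymous access to /healthz (the default) and -k for the cluster-internal CA:
	curl -sk "https://192.168.103.2:8443/healthz?verbose"
	curl -sk "https://192.168.103.2:8443/healthz/poststarthook/rbac/bootstrap-roles"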
	I1025 09:55:00.541399  457008 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1025 09:54:59.550820  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:55:02.050441  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:55:00.543335  457008 addons.go:514] duration metric: took 2.690467088s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 09:55:01.025434  457008 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:55:01.029928  457008 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:55:01.029957  457008 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:55:01.525569  457008 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:55:01.530405  457008 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:55:01.531317  457008 api_server.go:141] control plane version: v1.34.1
	I1025 09:55:01.531342  457008 api_server.go:131] duration metric: took 1.00601266s to wait for apiserver health ...
	I1025 09:55:01.531364  457008 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:55:01.534517  457008 system_pods.go:59] 8 kube-system pods found
	I1025 09:55:01.534557  457008 system_pods.go:61] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:55:01.534571  457008 system_pods.go:61] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:55:01.534580  457008 system_pods.go:61] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:55:01.534586  457008 system_pods.go:61] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:55:01.534594  457008 system_pods.go:61] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:55:01.534601  457008 system_pods.go:61] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:55:01.534607  457008 system_pods.go:61] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:55:01.534612  457008 system_pods.go:61] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Running
	I1025 09:55:01.534619  457008 system_pods.go:74] duration metric: took 3.248397ms to wait for pod list to return data ...
	I1025 09:55:01.534630  457008 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:55:01.537060  457008 default_sa.go:45] found service account: "default"
	I1025 09:55:01.537080  457008 default_sa.go:55] duration metric: took 2.439904ms for default service account to be created ...
	I1025 09:55:01.537090  457008 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:55:01.539504  457008 system_pods.go:86] 8 kube-system pods found
	I1025 09:55:01.539542  457008 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:55:01.539555  457008 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:55:01.539567  457008 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:55:01.539579  457008 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:55:01.539592  457008 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:55:01.539604  457008 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:55:01.539623  457008 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:55:01.539632  457008 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Running
	I1025 09:55:01.539642  457008 system_pods.go:126] duration metric: took 2.545561ms to wait for k8s-apps to be running ...
	I1025 09:55:01.539655  457008 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:55:01.539709  457008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:55:01.553256  457008 system_svc.go:56] duration metric: took 13.59133ms WaitForService to wait for kubelet
	I1025 09:55:01.553280  457008 kubeadm.go:586] duration metric: took 3.700453295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:55:01.553307  457008 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:55:01.556207  457008 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:55:01.556239  457008 node_conditions.go:123] node cpu capacity is 8
	I1025 09:55:01.556252  457008 node_conditions.go:105] duration metric: took 2.940915ms to run NodePressure ...
	I1025 09:55:01.556266  457008 start.go:241] waiting for startup goroutines ...
	I1025 09:55:01.556272  457008 start.go:246] waiting for cluster config update ...
	I1025 09:55:01.556281  457008 start.go:255] writing updated cluster config ...
	I1025 09:55:01.556546  457008 ssh_runner.go:195] Run: rm -f paused
	I1025 09:55:01.560261  457008 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:55:01.563470  457008 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:55:03.568631  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:04.550637  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:55:07.049223  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:55:08.549788  449952 pod_ready.go:94] pod "coredns-66bc5c9577-29ltg" is "Ready"
	I1025 09:55:08.549821  449952 pod_ready.go:86] duration metric: took 38.005597851s for pod "coredns-66bc5c9577-29ltg" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.552948  449952 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.557263  449952 pod_ready.go:94] pod "etcd-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:08.557290  449952 pod_ready.go:86] duration metric: took 4.316609ms for pod "etcd-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.559329  449952 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.562970  449952 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:08.562995  449952 pod_ready.go:86] duration metric: took 3.629414ms for pod "kube-apiserver-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.564977  449952 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.748757  449952 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:08.748792  449952 pod_ready.go:86] duration metric: took 183.792651ms for pod "kube-controller-manager-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.948726  449952 pod_ready.go:83] waiting for pod "kube-proxy-bg94v" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.347710  449952 pod_ready.go:94] pod "kube-proxy-bg94v" is "Ready"
	I1025 09:55:09.347744  449952 pod_ready.go:86] duration metric: took 398.987622ms for pod "kube-proxy-bg94v" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.548542  449952 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.947051  449952 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:09.947079  449952 pod_ready.go:86] duration metric: took 398.50407ms for pod "kube-scheduler-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.947091  449952 pod_ready.go:40] duration metric: took 39.406100171s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:55:09.990440  449952 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:55:10.024224  449952 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-880773" cluster and "default" namespace by default
	W1025 09:55:05.569905  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:07.571127  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:10.069750  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:12.569719  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:15.068937  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:17.569445  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:20.069705  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:22.069926  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:24.569244  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 25 09:54:40 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:40.495968899Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:54:40 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:40.499326483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:54:40 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:40.49936112Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.536325753Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e173cab8-5a6d-4c3b-be4d-dea38ae2663a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.539846838Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1862bcef-72b1-4286-9f62-826ad93b3aac name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.542923783Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d/dashboard-metrics-scraper" id=19234b2e-2171-48b9-99d5-244fb49846a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.54313385Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.550431127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.550862145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.582139057Z" level=info msg="Created container ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d/dashboard-metrics-scraper" id=19234b2e-2171-48b9-99d5-244fb49846a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.586564498Z" level=info msg="Starting container: ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609" id=e081a8a0-1679-441d-8a5f-1a4c87080c7c name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:54:53 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:53.590388538Z" level=info msg="Started container" PID=1757 containerID=ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d/dashboard-metrics-scraper id=e081a8a0-1679-441d-8a5f-1a4c87080c7c name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e63ff4d972ed89ff8936c8d5c78b31245c823addc7956f53298ed8029d1219e
	Oct 25 09:54:54 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:54.64679347Z" level=info msg="Removing container: 91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473" id=f84ef147-1345-41ff-982f-ad7b114dfdab name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:54:54 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:54:54.658592283Z" level=info msg="Removed container 91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d/dashboard-metrics-scraper" id=f84ef147-1345-41ff-982f-ad7b114dfdab name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.664005017Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d8207ed3-c832-4e29-ade3-a0bc70c9ca30 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.664962092Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=67de2a4e-998c-49ea-9a12-ddf1e4e5a443 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.666105183Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=606a5f03-dc74-45d4-b848-83bb7b3310ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.666233532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.671396729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.671597558Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1ac1e7bbb6e8bff98f2b71876db239fcd926b4f9b5c65c09f8e989b2342c0fde/merged/etc/passwd: no such file or directory"
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.67163028Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1ac1e7bbb6e8bff98f2b71876db239fcd926b4f9b5c65c09f8e989b2342c0fde/merged/etc/group: no such file or directory"
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.671888902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.699621372Z" level=info msg="Created container 8be1b97ae8c99fdf0cfe2552030fe0a0e942bad96c59260acda25fc373b2370f: kube-system/storage-provisioner/storage-provisioner" id=606a5f03-dc74-45d4-b848-83bb7b3310ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.700211474Z" level=info msg="Starting container: 8be1b97ae8c99fdf0cfe2552030fe0a0e942bad96c59260acda25fc373b2370f" id=a3811c51-e2b0-401a-894c-4c488744fd38 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:55:00 default-k8s-diff-port-880773 crio[563]: time="2025-10-25T09:55:00.702008703Z" level=info msg="Started container" PID=1775 containerID=8be1b97ae8c99fdf0cfe2552030fe0a0e942bad96c59260acda25fc373b2370f description=kube-system/storage-provisioner/storage-provisioner id=a3811c51-e2b0-401a-894c-4c488744fd38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=05743631a5c9c5c72671044fea68f435aa81d7b10af29cf143c8a43f21200b52
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	8be1b97ae8c99       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   05743631a5c9c       storage-provisioner                                    kube-system
	ea46cdad81552       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           33 seconds ago       Exited              dashboard-metrics-scraper   2                   6e63ff4d972ed       dashboard-metrics-scraper-6ffb444bf9-nv47d             kubernetes-dashboard
	14107879e5563       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago       Running             kubernetes-dashboard        0                   dd0575f9c1896       kubernetes-dashboard-855c9754f9-qzqj5                  kubernetes-dashboard
	ecf6c0ce3401b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   813d3d1b088ac       busybox                                                default
	9e0ebd1eedf1b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   1e661c8a5cf2f       coredns-66bc5c9577-29ltg                               kube-system
	e5dc0927bdb5b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   05743631a5c9c       storage-provisioner                                    kube-system
	7d1412ad484fd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   5f51bb4d28158       kindnet-cnqn8                                          kube-system
	3fb115552602e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           57 seconds ago       Running             kube-proxy                  0                   4d4d94853e970       kube-proxy-bg94v                                       kube-system
	8a40c30412194       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago       Running             etcd                        0                   109744b0dbcf6       etcd-default-k8s-diff-port-880773                      kube-system
	1099e940dc59e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   cd393076c409e       kube-apiserver-default-k8s-diff-port-880773            kube-system
	b7360eb6624b8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   08b6a6a853a56       kube-controller-manager-default-k8s-diff-port-880773   kube-system
	9a7e2aef555d4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   0a2f607f7d690       kube-scheduler-default-k8s-diff-port-880773            kube-system
	
	
	==> coredns [9e0ebd1eedf1bfec2ce3bb2e23264d77a78263d4507d8318f07c179eaf43ef90] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51534 - 39285 "HINFO IN 4144776211950941106.7961538770481254160. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.070766889s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-880773
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-880773
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=default-k8s-diff-port-880773
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_53_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:52:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-880773
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:55:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:55:10 +0000   Sat, 25 Oct 2025 09:52:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:55:10 +0000   Sat, 25 Oct 2025 09:52:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:55:10 +0000   Sat, 25 Oct 2025 09:52:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:55:10 +0000   Sat, 25 Oct 2025 09:53:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-880773
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                0255f5ba-c095-4977-bf24-556780863944
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-29ltg                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m23s
	  kube-system                 etcd-default-k8s-diff-port-880773                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m28s
	  kube-system                 kindnet-cnqn8                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-880773             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-880773    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-bg94v                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-880773             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nv47d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qzqj5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m21s                  kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  Starting                 2m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m28s                  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m28s                  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m28s                  kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m24s                  node-controller  Node default-k8s-diff-port-880773 event: Registered Node default-k8s-diff-port-880773 in Controller
	  Normal  NodeReady                102s                   kubelet          Node default-k8s-diff-port-880773 status is now: NodeReady
	  Normal  Starting                 61s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-880773 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                    node-controller  Node default-k8s-diff-port-880773 event: Registered Node default-k8s-diff-port-880773 in Controller
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	
	
	==> etcd [8a40c304121945c99334f375a4fc8f1073390b82cca6a44c6e2b224a5804ed43] <==
	{"level":"warn","ts":"2025-10-25T09:54:28.396295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.404966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.411783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.419398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.425850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.432232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.438384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.447781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.455720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.461909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.471385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.478333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.484478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.490774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.498159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.504819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.510942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.517342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.523392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.529548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.536185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.553018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.559187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.565752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:28.609226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44288","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:55:27 up  1:37,  0 user,  load average: 3.74, 4.31, 2.81
	Linux default-k8s-diff-port-880773 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7d1412ad484fd280b8e475edb38e636d8b265e528fb5edc4c49694d11aa74026] <==
	I1025 09:54:30.178851       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:54:30.179168       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1025 09:54:30.179377       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:54:30.179394       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:54:30.179420       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:54:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:54:30.478871       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:54:30.479465       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:54:30.479481       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:54:30.479691       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:54:30.978642       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:54:30.978686       1 metrics.go:72] Registering metrics
	I1025 09:54:30.978784       1 controller.go:711] "Syncing nftables rules"
	I1025 09:54:40.478954       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:54:40.479008       1 main.go:301] handling current node
	I1025 09:54:50.483464       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:54:50.483504       1 main.go:301] handling current node
	I1025 09:55:00.478496       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:55:00.478524       1 main.go:301] handling current node
	I1025 09:55:10.479448       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:55:10.479499       1 main.go:301] handling current node
	I1025 09:55:20.481150       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1025 09:55:20.481184       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1099e940dc59e4a7fc6edf4f82c427fc4633cbc73d1759f0ef430fccd002219f] <==
	I1025 09:54:29.078487       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:54:29.078498       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 09:54:29.078600       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:54:29.078608       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:54:29.079185       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:54:29.079264       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:54:29.079280       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:54:29.079288       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:54:29.079294       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:54:29.080363       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:54:29.080452       1 policy_source.go:240] refreshing policies
	I1025 09:54:29.084224       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:54:29.091647       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:54:29.117318       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:54:29.324152       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:54:29.358183       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:54:29.377436       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:54:29.384173       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:54:29.391587       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:54:29.426528       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.217.97"}
	I1025 09:54:29.437908       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.156.231"}
	I1025 09:54:29.982853       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:54:32.420411       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:54:32.862493       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:54:33.019384       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b7360eb6624b8284557553c607130a8087e3690512dcc9caea4351f9f876fd02] <==
	I1025 09:54:32.409696       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:54:32.409829       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:54:32.409857       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:54:32.410315       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:54:32.410398       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:54:32.410450       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:54:32.410483       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:54:32.410493       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:54:32.410811       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:54:32.410936       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-880773"
	I1025 09:54:32.410985       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:54:32.411077       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:54:32.411403       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:54:32.411773       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:54:32.413124       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:54:32.413142       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:54:32.413151       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:54:32.413974       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:54:32.414172       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:54:32.417043       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:54:32.418999       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:54:32.419991       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:54:32.422430       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:54:32.423624       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:54:32.427392       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3fb115552602e82341d7e2918cd812563ad3b933adfcf256e50f6b6234235080] <==
	I1025 09:54:29.938497       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:54:30.010914       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:54:30.111569       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:54:30.111621       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1025 09:54:30.111755       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:54:30.135072       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:54:30.135140       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:54:30.141657       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:54:30.142104       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:54:30.142142       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:54:30.143710       1 config.go:200] "Starting service config controller"
	I1025 09:54:30.143728       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:54:30.143885       1 config.go:309] "Starting node config controller"
	I1025 09:54:30.143975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:54:30.144159       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:54:30.144182       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:54:30.144219       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:54:30.144274       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:54:30.243913       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:54:30.245089       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:54:30.245088       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:54:30.245121       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9a7e2aef555d4452a0b73ff6d39e556aaf40affe43c7adcaf8fc119b3910c298] <==
	I1025 09:54:28.059483       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:54:29.002478       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:54:29.002518       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:54:29.002549       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:54:29.002559       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:54:29.036116       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:54:29.036148       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:54:29.038194       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:54:29.038482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:54:29.038689       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:54:29.038801       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:54:29.139887       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:54:33 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:33.042223     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0212e92d-87ca-454e-be84-4baeec2893ff-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-nv47d\" (UID: \"0212e92d-87ca-454e-be84-4baeec2893ff\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d"
	Oct 25 09:54:35 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:35.594616     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podStartSLOduration=1.394952657 podStartE2EDuration="3.594593206s" podCreationTimestamp="2025-10-25 09:54:32 +0000 UTC" firstStartedPulling="2025-10-25 09:54:33.308511987 +0000 UTC m=+6.861223472" lastFinishedPulling="2025-10-25 09:54:35.508152527 +0000 UTC m=+9.060864021" observedRunningTime="2025-10-25 09:54:35.594468653 +0000 UTC m=+9.147180152" watchObservedRunningTime="2025-10-25 09:54:35.594593206 +0000 UTC m=+9.147304705"
	Oct 25 09:54:36 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:36.587976     723 scope.go:117] "RemoveContainer" containerID="077b9fd8a06d1f7c2cf431734be80bbaeaa924d81ae6289e965eb0db669af618"
	Oct 25 09:54:37 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:37.593255     723 scope.go:117] "RemoveContainer" containerID="077b9fd8a06d1f7c2cf431734be80bbaeaa924d81ae6289e965eb0db669af618"
	Oct 25 09:54:37 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:37.593638     723 scope.go:117] "RemoveContainer" containerID="91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473"
	Oct 25 09:54:37 default-k8s-diff-port-880773 kubelet[723]: E1025 09:54:37.593961     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nv47d_kubernetes-dashboard(0212e92d-87ca-454e-be84-4baeec2893ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podUID="0212e92d-87ca-454e-be84-4baeec2893ff"
	Oct 25 09:54:38 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:38.417926     723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:54:38 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:38.599321     723 scope.go:117] "RemoveContainer" containerID="91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473"
	Oct 25 09:54:38 default-k8s-diff-port-880773 kubelet[723]: E1025 09:54:38.599546     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nv47d_kubernetes-dashboard(0212e92d-87ca-454e-be84-4baeec2893ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podUID="0212e92d-87ca-454e-be84-4baeec2893ff"
	Oct 25 09:54:39 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:39.604557     723 scope.go:117] "RemoveContainer" containerID="91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473"
	Oct 25 09:54:39 default-k8s-diff-port-880773 kubelet[723]: E1025 09:54:39.604734     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nv47d_kubernetes-dashboard(0212e92d-87ca-454e-be84-4baeec2893ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podUID="0212e92d-87ca-454e-be84-4baeec2893ff"
	Oct 25 09:54:40 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:40.475925     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qzqj5" podStartSLOduration=2.366601028 podStartE2EDuration="8.475900344s" podCreationTimestamp="2025-10-25 09:54:32 +0000 UTC" firstStartedPulling="2025-10-25 09:54:33.313062202 +0000 UTC m=+6.865773683" lastFinishedPulling="2025-10-25 09:54:39.422361513 +0000 UTC m=+12.975072999" observedRunningTime="2025-10-25 09:54:39.619195385 +0000 UTC m=+13.171906885" watchObservedRunningTime="2025-10-25 09:54:40.475900344 +0000 UTC m=+14.028611839"
	Oct 25 09:54:53 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:53.535743     723 scope.go:117] "RemoveContainer" containerID="91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473"
	Oct 25 09:54:54 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:54.645361     723 scope.go:117] "RemoveContainer" containerID="91a551b2985d532ac32cefe782bd2c1da14d8aeb092e7e7e1cd2d85eb9f8e473"
	Oct 25 09:54:54 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:54.645581     723 scope.go:117] "RemoveContainer" containerID="ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609"
	Oct 25 09:54:54 default-k8s-diff-port-880773 kubelet[723]: E1025 09:54:54.645778     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nv47d_kubernetes-dashboard(0212e92d-87ca-454e-be84-4baeec2893ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podUID="0212e92d-87ca-454e-be84-4baeec2893ff"
	Oct 25 09:54:58 default-k8s-diff-port-880773 kubelet[723]: I1025 09:54:58.582833     723 scope.go:117] "RemoveContainer" containerID="ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609"
	Oct 25 09:54:58 default-k8s-diff-port-880773 kubelet[723]: E1025 09:54:58.583128     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nv47d_kubernetes-dashboard(0212e92d-87ca-454e-be84-4baeec2893ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podUID="0212e92d-87ca-454e-be84-4baeec2893ff"
	Oct 25 09:55:00 default-k8s-diff-port-880773 kubelet[723]: I1025 09:55:00.663586     723 scope.go:117] "RemoveContainer" containerID="e5dc0927bdb5b9abca88e2f1181fcab28f3d98593d8c76d8d66e67df6c8841e7"
	Oct 25 09:55:13 default-k8s-diff-port-880773 kubelet[723]: I1025 09:55:13.535686     723 scope.go:117] "RemoveContainer" containerID="ea46cdad815525a44a51551be277a793ef57b7528437ff46dd03f0c81c0b0609"
	Oct 25 09:55:13 default-k8s-diff-port-880773 kubelet[723]: E1025 09:55:13.535961     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nv47d_kubernetes-dashboard(0212e92d-87ca-454e-be84-4baeec2893ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nv47d" podUID="0212e92d-87ca-454e-be84-4baeec2893ff"
	Oct 25 09:55:22 default-k8s-diff-port-880773 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:55:22 default-k8s-diff-port-880773 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:55:22 default-k8s-diff-port-880773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:55:22 default-k8s-diff-port-880773 systemd[1]: kubelet.service: Consumed 1.807s CPU time.
	
	
	==> kubernetes-dashboard [14107879e5563ed6b5a7c822a1deb19829cc37e77da237976440a7dadb7144c1] <==
	2025/10/25 09:54:39 Starting overwatch
	2025/10/25 09:54:39 Using namespace: kubernetes-dashboard
	2025/10/25 09:54:39 Using in-cluster config to connect to apiserver
	2025/10/25 09:54:39 Using secret token for csrf signing
	2025/10/25 09:54:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:54:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:54:39 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:54:39 Generating JWE encryption key
	2025/10/25 09:54:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:54:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:54:39 Initializing JWE encryption key from synchronized object
	2025/10/25 09:54:39 Creating in-cluster Sidecar client
	2025/10/25 09:54:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:54:39 Serving insecurely on HTTP port: 9090
	2025/10/25 09:55:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8be1b97ae8c99fdf0cfe2552030fe0a0e942bad96c59260acda25fc373b2370f] <==
	I1025 09:55:00.713657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:55:00.721067       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:55:00.721112       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:55:00.723394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:04.178861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:08.441046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:12.038863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:15.092572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:18.114879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:18.119220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:55:18.119404       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:55:18.119553       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6a6b94ce-f3db-45ae-b74d-db800648c1d4", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-880773_bdc812ea-4041-4a08-b507-90e42a7f24d7 became leader
	I1025 09:55:18.119600       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-880773_bdc812ea-4041-4a08-b507-90e42a7f24d7!
	W1025 09:55:18.121288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:18.126123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:55:18.219935       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-880773_bdc812ea-4041-4a08-b507-90e42a7f24d7!
	W1025 09:55:20.129091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:20.133398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:22.136988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:22.140876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:24.143747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:24.147906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:26.150603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:26.156747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e5dc0927bdb5b9abca88e2f1181fcab28f3d98593d8c76d8d66e67df6c8841e7] <==
	I1025 09:54:29.918057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:54:59.923822       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773: exit status 2 (331.257586ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-880773 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.13s)

TestStartStop/group/embed-certs/serial/Pause (5.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-846915 --alsologtostderr -v=1
E1025 09:55:49.169880  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/custom-flannel-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-846915 --alsologtostderr -v=1: exit status 80 (2.292413124s)

-- stdout --
	* Pausing node embed-certs-846915 ... 
	
	

-- /stdout --
** stderr ** 
	I1025 09:55:49.214400  463329 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:55:49.214669  463329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:55:49.214678  463329 out.go:374] Setting ErrFile to fd 2...
	I1025 09:55:49.214682  463329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:55:49.214860  463329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:55:49.215100  463329 out.go:368] Setting JSON to false
	I1025 09:55:49.215146  463329 mustload.go:65] Loading cluster: embed-certs-846915
	I1025 09:55:49.215519  463329 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:55:49.215934  463329 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:55:49.234893  463329 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:55:49.235208  463329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:55:49.289569  463329 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-25 09:55:49.280152466 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:55:49.290162  463329 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-846915 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:55:49.292034  463329 out.go:179] * Pausing node embed-certs-846915 ... 
	I1025 09:55:49.293236  463329 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:55:49.293547  463329 ssh_runner.go:195] Run: systemctl --version
	I1025 09:55:49.293594  463329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:55:49.311372  463329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:55:49.410508  463329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:55:49.423291  463329 pause.go:52] kubelet running: true
	I1025 09:55:49.423399  463329 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
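
For context: `systemctl is-active --quiet` reports state only through its exit code (0 when the unit is active), which is where the `kubelet running: true` line above comes from; `systemctl disable --now` then stops and disables the unit before the containers are paused. A minimal sketch of that exit-code probe in Go, assuming only that sudo and systemctl are on PATH (`kubeletRunning` is an illustrative name, not a minikube helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletRunning mirrors the probe above: `systemctl is-active --quiet`
	// prints nothing and exits 0 when the unit is active, so the error
	// returned by Run() alone answers the question.
	func kubeletRunning() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		fmt.Println("kubelet running:", kubeletRunning())
	}
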
	I1025 09:55:49.578072  463329 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:55:49.578188  463329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:55:49.644673  463329 cri.go:89] found id: "8fcb04b4201b14c458c49011837dbe7ebc093eadb439b95e7805d450e64ed33c"
	I1025 09:55:49.644697  463329 cri.go:89] found id: "666a8cee87b7020a849d0d0ed2e5ed7ac45f562ec0698b1bdac93a0834c88d97"
	I1025 09:55:49.644701  463329 cri.go:89] found id: "32ca438e08c054b3e50b3233e1b81fce33c79d0787be9c3e7e3baab4e4734697"
	I1025 09:55:49.644704  463329 cri.go:89] found id: "0963c187a474d790c72b9c8390401140ff56882dd70e39b8e23c8ca7acaafd5c"
	I1025 09:55:49.644707  463329 cri.go:89] found id: "7f397e67e1866c16c1c0722221598e3f82eb5387d3ab8b306224b816096ebca1"
	I1025 09:55:49.644710  463329 cri.go:89] found id: "46c544af25ffafca1d729eb37ffa1959807879d6234f84e37186f47588ac6ec9"
	I1025 09:55:49.644714  463329 cri.go:89] found id: "007e89b7baf40445b09598af39cfba319acdf11728b62f56a4aaf210995d2127"
	I1025 09:55:49.644716  463329 cri.go:89] found id: "48b644dd8de53c8507fceecb6ceae794c15a6e4bfda24197562f2d2226ed7a7a"
	I1025 09:55:49.644718  463329 cri.go:89] found id: "1a49d21a7ef6b31c7d183bb24b6647a09b20b673fd98b5105086202f5e9caed0"
	I1025 09:55:49.644738  463329 cri.go:89] found id: "377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12"
	I1025 09:55:49.644740  463329 cri.go:89] found id: "586ed27083f1918f7a0180e22ca12263e87a4c0552578e80d52efc7dab81d226"
	I1025 09:55:49.644743  463329 cri.go:89] found id: ""
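
The eleven IDs above come from `crictl ps -a --quiet` run once per namespace with an `io.kubernetes.pod.namespace` label filter, exactly as in the eval'd command earlier. A rough Go equivalent, assuming a working crictl plus sudo (`listByNamespace` is an illustrative name):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listByNamespace lists all CRI container IDs whose pod lives in the
	// given namespace; crictl prints one ID per line.
	func listByNamespace(ns string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace="+ns).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, ns := range []string{"kube-system", "kubernetes-dashboard", "istio-operator"} {
			ids, err := listByNamespace(ns)
			if err != nil {
				fmt.Println(ns, "error:", err)
				continue
			}
			fmt.Println(ns, len(ids), "containers")
		}
	}
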
	I1025 09:55:49.644781  463329 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:55:49.656964  463329 retry.go:31] will retry after 311.78011ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:55:49Z" level=error msg="open /run/runc: no such file or directory"
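
The `retry.go:31` lines show the failed `runc list` being re-run after a short randomized delay rather than aborting immediately. A minimal sketch of that retry-with-jitter pattern (`retryWithJitter` is an illustrative name, not minikube's actual retry helper):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter re-runs fn until it succeeds or attempts run out,
	// sleeping a randomized delay between tries, in the spirit of the
	// "will retry after 311.78011ms" lines above.
	func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithJitter(4, 300*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("open /run/runc: no such file or directory")
			}
			return nil
		})
		fmt.Println("result:", err)
	}
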
	I1025 09:55:49.969601  463329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:55:49.982631  463329 pause.go:52] kubelet running: false
	I1025 09:55:49.982684  463329 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:55:50.121478  463329 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:55:50.121576  463329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:55:50.187531  463329 cri.go:89] found id: "8fcb04b4201b14c458c49011837dbe7ebc093eadb439b95e7805d450e64ed33c"
	I1025 09:55:50.187552  463329 cri.go:89] found id: "666a8cee87b7020a849d0d0ed2e5ed7ac45f562ec0698b1bdac93a0834c88d97"
	I1025 09:55:50.187556  463329 cri.go:89] found id: "32ca438e08c054b3e50b3233e1b81fce33c79d0787be9c3e7e3baab4e4734697"
	I1025 09:55:50.187560  463329 cri.go:89] found id: "0963c187a474d790c72b9c8390401140ff56882dd70e39b8e23c8ca7acaafd5c"
	I1025 09:55:50.187563  463329 cri.go:89] found id: "7f397e67e1866c16c1c0722221598e3f82eb5387d3ab8b306224b816096ebca1"
	I1025 09:55:50.187568  463329 cri.go:89] found id: "46c544af25ffafca1d729eb37ffa1959807879d6234f84e37186f47588ac6ec9"
	I1025 09:55:50.187572  463329 cri.go:89] found id: "007e89b7baf40445b09598af39cfba319acdf11728b62f56a4aaf210995d2127"
	I1025 09:55:50.187576  463329 cri.go:89] found id: "48b644dd8de53c8507fceecb6ceae794c15a6e4bfda24197562f2d2226ed7a7a"
	I1025 09:55:50.187581  463329 cri.go:89] found id: "1a49d21a7ef6b31c7d183bb24b6647a09b20b673fd98b5105086202f5e9caed0"
	I1025 09:55:50.187589  463329 cri.go:89] found id: "377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12"
	I1025 09:55:50.187594  463329 cri.go:89] found id: "586ed27083f1918f7a0180e22ca12263e87a4c0552578e80d52efc7dab81d226"
	I1025 09:55:50.187598  463329 cri.go:89] found id: ""
	I1025 09:55:50.187645  463329 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:55:50.199474  463329 retry.go:31] will retry after 389.500593ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:55:50Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:55:50.590156  463329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:55:50.603048  463329 pause.go:52] kubelet running: false
	I1025 09:55:50.603099  463329 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:55:50.739206  463329 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:55:50.739293  463329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:55:50.807075  463329 cri.go:89] found id: "8fcb04b4201b14c458c49011837dbe7ebc093eadb439b95e7805d450e64ed33c"
	I1025 09:55:50.807102  463329 cri.go:89] found id: "666a8cee87b7020a849d0d0ed2e5ed7ac45f562ec0698b1bdac93a0834c88d97"
	I1025 09:55:50.807107  463329 cri.go:89] found id: "32ca438e08c054b3e50b3233e1b81fce33c79d0787be9c3e7e3baab4e4734697"
	I1025 09:55:50.807122  463329 cri.go:89] found id: "0963c187a474d790c72b9c8390401140ff56882dd70e39b8e23c8ca7acaafd5c"
	I1025 09:55:50.807126  463329 cri.go:89] found id: "7f397e67e1866c16c1c0722221598e3f82eb5387d3ab8b306224b816096ebca1"
	I1025 09:55:50.807129  463329 cri.go:89] found id: "46c544af25ffafca1d729eb37ffa1959807879d6234f84e37186f47588ac6ec9"
	I1025 09:55:50.807133  463329 cri.go:89] found id: "007e89b7baf40445b09598af39cfba319acdf11728b62f56a4aaf210995d2127"
	I1025 09:55:50.807136  463329 cri.go:89] found id: "48b644dd8de53c8507fceecb6ceae794c15a6e4bfda24197562f2d2226ed7a7a"
	I1025 09:55:50.807138  463329 cri.go:89] found id: "1a49d21a7ef6b31c7d183bb24b6647a09b20b673fd98b5105086202f5e9caed0"
	I1025 09:55:50.807145  463329 cri.go:89] found id: "377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12"
	I1025 09:55:50.807148  463329 cri.go:89] found id: "586ed27083f1918f7a0180e22ca12263e87a4c0552578e80d52efc7dab81d226"
	I1025 09:55:50.807151  463329 cri.go:89] found id: ""
	I1025 09:55:50.807190  463329 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:55:50.819612  463329 retry.go:31] will retry after 375.689021ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:55:50Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:55:51.196298  463329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:55:51.220217  463329 pause.go:52] kubelet running: false
	I1025 09:55:51.220302  463329 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:55:51.356417  463329 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:55:51.356518  463329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:55:51.423856  463329 cri.go:89] found id: "8fcb04b4201b14c458c49011837dbe7ebc093eadb439b95e7805d450e64ed33c"
	I1025 09:55:51.423885  463329 cri.go:89] found id: "666a8cee87b7020a849d0d0ed2e5ed7ac45f562ec0698b1bdac93a0834c88d97"
	I1025 09:55:51.423891  463329 cri.go:89] found id: "32ca438e08c054b3e50b3233e1b81fce33c79d0787be9c3e7e3baab4e4734697"
	I1025 09:55:51.423897  463329 cri.go:89] found id: "0963c187a474d790c72b9c8390401140ff56882dd70e39b8e23c8ca7acaafd5c"
	I1025 09:55:51.423901  463329 cri.go:89] found id: "7f397e67e1866c16c1c0722221598e3f82eb5387d3ab8b306224b816096ebca1"
	I1025 09:55:51.423906  463329 cri.go:89] found id: "46c544af25ffafca1d729eb37ffa1959807879d6234f84e37186f47588ac6ec9"
	I1025 09:55:51.423910  463329 cri.go:89] found id: "007e89b7baf40445b09598af39cfba319acdf11728b62f56a4aaf210995d2127"
	I1025 09:55:51.423913  463329 cri.go:89] found id: "48b644dd8de53c8507fceecb6ceae794c15a6e4bfda24197562f2d2226ed7a7a"
	I1025 09:55:51.423918  463329 cri.go:89] found id: "1a49d21a7ef6b31c7d183bb24b6647a09b20b673fd98b5105086202f5e9caed0"
	I1025 09:55:51.423928  463329 cri.go:89] found id: "377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12"
	I1025 09:55:51.423932  463329 cri.go:89] found id: "586ed27083f1918f7a0180e22ca12263e87a4c0552578e80d52efc7dab81d226"
	I1025 09:55:51.423935  463329 cri.go:89] found id: ""
	I1025 09:55:51.423984  463329 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:55:51.437941  463329 out.go:203] 
	W1025 09:55:51.439105  463329 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:55:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:55:51.439126  463329 out.go:285] * 
	W1025 09:55:51.443156  463329 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:55:51.444250  463329 out.go:203] 

                                                
                                                
** /stderr **
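
Note that the failure mode is identical on every attempt: with no `--root` flag, `runc list` reads runc's default state directory, /run/runc, and that directory does not exist on this crio node, so pause can see containers via crictl but never enumerate them via runc. One way to probe by hand which state root the runtime actually uses, sketched in Go; the candidate paths below are illustrative guesses, not confirmed CRI-O defaults:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Try `runc list` against a few candidate state roots; whichever root
	// the runtime really uses will list containers, while /run/runc fails
	// with the same "no such file or directory" as in the log above.
	func main() {
		for _, root := range []string{"/run/runc", "/run/crio", "/run/runc-crio"} {
			out, err := exec.Command("sudo", "runc", "--root", root, "list").CombinedOutput()
			fmt.Printf("root %s: err=%v\n%s", root, err, out)
		}
	}
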
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-846915 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-846915
helpers_test.go:243: (dbg) docker inspect embed-certs-846915:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43",
	        "Created": "2025-10-25T09:53:45.12554821Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 457304,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:54:50.734949201Z",
	            "FinishedAt": "2025-10-25T09:54:49.856132584Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43/hostname",
	        "HostsPath": "/var/lib/docker/containers/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43/hosts",
	        "LogPath": "/var/lib/docker/containers/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43-json.log",
	        "Name": "/embed-certs-846915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-846915:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-846915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43",
	                "LowerDir": "/var/lib/docker/overlay2/a7f2291046bf28c8b06385afeace8def42aba64bb2a48f5f68cdc889aa5b8f12-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a7f2291046bf28c8b06385afeace8def42aba64bb2a48f5f68cdc889aa5b8f12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a7f2291046bf28c8b06385afeace8def42aba64bb2a48f5f68cdc889aa5b8f12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a7f2291046bf28c8b06385afeace8def42aba64bb2a48f5f68cdc889aa5b8f12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-846915",
	                "Source": "/var/lib/docker/volumes/embed-certs-846915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-846915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-846915",
	                "name.minikube.sigs.k8s.io": "embed-certs-846915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "407cfe59ad12d477c3277c0cb272518e9bac16cef1cbaababf185e3f3db61f5f",
	            "SandboxKey": "/var/run/docker/netns/407cfe59ad12",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33257"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-846915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:1a:0c:72:6c:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "727501f067496090ecad65be87558162f256d8c8235dc960e3b62d2c325f512b",
	                    "EndpointID": "62c29b63c8f4c580541d0d27484e00500aee523f0a95fe973a0299951de8e489",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-846915",
	                        "95005cf1fe64"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
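
The `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls earlier in the log are ordinary Go templates evaluated against JSON like the above: indexing Ports twice yields the first host binding for 22/tcp, i.e. 33255 here. A self-contained re-run of that same template against a trimmed-down struct (the shapes are simplified for illustration):

	package main

	import (
		"os"
		"text/template"
	)

	// Minimal shape of the inspect output: Ports maps "22/tcp" to a list
	// of host bindings, and the template indexes into it twice.
	type binding struct{ HostIp, HostPort string }

	type inspect struct {
		NetworkSettings struct{ Ports map[string][]binding }
	}

	func main() {
		var c inspect
		c.NetworkSettings.Ports = map[string][]binding{
			"22/tcp": {{HostIp: "127.0.0.1", HostPort: "33255"}},
		}
		tmpl := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		tmpl.Execute(os.Stdout, c) // prints 33255; error ignored for brevity
	}
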
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-846915 -n embed-certs-846915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-846915 -n embed-certs-846915: exit status 2 (324.823208ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-846915 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-846915 logs -n 25: (1.078223604s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-656799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-880773 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-880773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-846915 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ stop    │ -p embed-certs-846915 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ no-preload-656799 image list --format=json                                                                                                                                                                                                    │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p no-preload-656799 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ delete  │ -p no-preload-656799                                                                                                                                                                                                                          │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ delete  │ -p no-preload-656799                                                                                                                                                                                                                          │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ old-k8s-version-676314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p old-k8s-version-676314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-846915 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:55 UTC │
	│ delete  │ -p old-k8s-version-676314                                                                                                                                                                                                                     │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ delete  │ -p old-k8s-version-676314                                                                                                                                                                                                                     │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ default-k8s-diff-port-880773 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ pause   │ -p default-k8s-diff-port-880773 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-880773                                                                                                                                                                                                               │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ delete  │ -p default-k8s-diff-port-880773                                                                                                                                                                                                               │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ image   │ embed-certs-846915 image list --format=json                                                                                                                                                                                                   │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ pause   │ -p embed-certs-846915 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:54:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:54:50.490480  457008 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:54:50.490778  457008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:50.490791  457008 out.go:374] Setting ErrFile to fd 2...
	I1025 09:54:50.490795  457008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:50.491023  457008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:54:50.491458  457008 out.go:368] Setting JSON to false
	I1025 09:54:50.492784  457008 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5834,"bootTime":1761380256,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:54:50.492874  457008 start.go:141] virtualization: kvm guest
	I1025 09:54:50.494727  457008 out.go:179] * [embed-certs-846915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:54:50.495938  457008 notify.go:220] Checking for updates...
	I1025 09:54:50.495955  457008 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:54:50.497200  457008 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:54:50.498359  457008 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:50.499624  457008 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:54:50.500821  457008 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:54:50.501999  457008 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:54:50.503677  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:50.504213  457008 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:54:50.529014  457008 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:54:50.529154  457008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:50.591445  457008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:54:50.580621433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:50.591560  457008 docker.go:318] overlay module found
	I1025 09:54:50.592851  457008 out.go:179] * Using the docker driver based on existing profile
	I1025 09:54:50.593988  457008 start.go:305] selected driver: docker
	I1025 09:54:50.594007  457008 start.go:925] validating driver "docker" against &{Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:50.594132  457008 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:54:50.594767  457008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:50.658713  457008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:54:50.645802852 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:50.659072  457008 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:50.659108  457008 cni.go:84] Creating CNI manager for ""
	I1025 09:54:50.659179  457008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:50.659237  457008 start.go:349] cluster config:
	{Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:50.660979  457008 out.go:179] * Starting "embed-certs-846915" primary control-plane node in "embed-certs-846915" cluster
	I1025 09:54:50.662225  457008 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:54:50.663491  457008 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:54:50.664700  457008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:50.664762  457008 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:54:50.664778  457008 cache.go:58] Caching tarball of preloaded images
	I1025 09:54:50.664819  457008 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:54:50.664906  457008 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:54:50.664923  457008 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
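
These preload lines are the cache-hit path: the tarball's presence on disk is the entire check that lets the download be skipped. A sketch of that existence test in Go, with the cache path spelled out for illustration:

	package main

	import (
		"fmt"
		"os"
	)

	// cachedPreload reports whether the preloaded-images tarball already
	// exists, mirroring the "Found local preload ... skipping download"
	// lines above.
	func cachedPreload(path string) bool {
		info, err := os.Stat(path)
		return err == nil && !info.IsDir()
	}

	func main() {
		p := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
		fmt.Println("preload cached:", cachedPreload(p))
	}
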
	I1025 09:54:50.665060  457008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/config.json ...
	I1025 09:54:50.686709  457008 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:54:50.686734  457008 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:54:50.686758  457008 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:54:50.686788  457008 start.go:360] acquireMachinesLock for embed-certs-846915: {Name:mk6afaad62774c341d106d1a8d37743a274e5cb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:54:50.686902  457008 start.go:364] duration metric: took 69.005µs to acquireMachinesLock for "embed-certs-846915"
	I1025 09:54:50.686926  457008 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:54:50.686937  457008 fix.go:54] fixHost starting: 
	I1025 09:54:50.687222  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:50.706726  457008 fix.go:112] recreateIfNeeded on embed-certs-846915: state=Stopped err=<nil>
	W1025 09:54:50.706755  457008 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:54:50.550561  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:53.049954  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:54:50.708166  457008 out.go:252] * Restarting existing docker container for "embed-certs-846915" ...
	I1025 09:54:50.708247  457008 cli_runner.go:164] Run: docker start embed-certs-846915
	I1025 09:54:50.967025  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:50.987855  457008 kic.go:430] container "embed-certs-846915" state is running.
	I1025 09:54:50.988396  457008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-846915
	I1025 09:54:51.010564  457008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/config.json ...
	I1025 09:54:51.010825  457008 machine.go:93] provisionDockerMachine start ...
	I1025 09:54:51.010912  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:51.030680  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:51.031028  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:51.031045  457008 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:54:51.031643  457008 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57398->127.0.0.1:33255: read: connection reset by peer
	I1025 09:54:54.174504  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-846915
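
The dial error at 09:54:51 ("connection reset by peer") followed by a clean hostname round-trip at 09:54:54 is the normal restart pattern: the SSH client keeps re-dialing until sshd inside the freshly started container accepts connections. A TCP-level version of that wait loop, sketched in Go (the real client also retries the SSH handshake itself; the address and timeout here are illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForTCP dials the address until a connection succeeds or the
	// deadline passes; early attempts against a just-restarted container
	// fail much like the "connection reset by peer" line above.
	func waitForTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if conn, err := net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", addr)
	}

	func main() {
		fmt.Println(waitForTCP("127.0.0.1:33255", 30*time.Second))
	}
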
	
	I1025 09:54:54.174532  457008 ubuntu.go:182] provisioning hostname "embed-certs-846915"
	I1025 09:54:54.174596  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:54.193572  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:54.193807  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:54.193820  457008 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-846915 && echo "embed-certs-846915" | sudo tee /etc/hostname
	I1025 09:54:54.343404  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-846915
	
	I1025 09:54:54.343512  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:54.361545  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:54.361766  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:54.361784  457008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-846915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-846915/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-846915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:54:54.501002  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
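The shell snippet above keeps /etc/hosts idempotent: it does nothing if the hostname is already mapped, rewrites an existing 127.0.1.1 line if there is one, and appends otherwise. A minimal Go sketch of the same logic, operating on a local copy of the file (the path and hostname are placeholders, not minikube code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostname mirrors the grep/sed/tee sequence in the log:
// no-op if an entry for name exists, replace a 127.0.1.1 line if
// present, otherwise append one.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Equivalent of: grep -xq '.*\s<name>' /etc/hosts
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
		return nil // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + name
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte(entry))
	} else {
		data = append(data, []byte(entry+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	// "hosts" is a local stand-in for /etc/hosts inside the container.
	if err := ensureHostname("hosts", "embed-certs-846915"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}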
	I1025 09:54:54.501029  457008 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:54:54.501072  457008 ubuntu.go:190] setting up certificates
	I1025 09:54:54.501087  457008 provision.go:84] configureAuth start
	I1025 09:54:54.501144  457008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-846915
	I1025 09:54:54.519513  457008 provision.go:143] copyHostCerts
	I1025 09:54:54.519592  457008 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:54:54.519607  457008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:54:54.519682  457008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:54:54.519809  457008 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:54:54.519821  457008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:54:54.519850  457008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:54:54.519924  457008 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:54:54.519931  457008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:54:54.519959  457008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:54:54.520024  457008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.embed-certs-846915 san=[127.0.0.1 192.168.103.2 embed-certs-846915 localhost minikube]
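The server certificate above is generated with SANs covering the loopback address, the container IP, the hostname, localhost, and minikube. A minimal sketch of issuing a certificate with that same SAN set via Go's crypto/x509 (self-signed here for brevity; the real flow signs with the CA in certs/ca.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-846915"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN set from the log: san=[127.0.0.1 192.168.103.2 embed-certs-846915 localhost minikube]
		DNSNames:    []string{"embed-certs-846915", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
	}
	// Self-signed: template doubles as parent. minikube would pass its CA here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}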
	I1025 09:54:54.903702  457008 provision.go:177] copyRemoteCerts
	I1025 09:54:54.903771  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:54:54.903818  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:54.921801  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.047195  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:54:55.066909  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:54:55.085856  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:54:55.103394  457008 provision.go:87] duration metric: took 602.287274ms to configureAuth
	I1025 09:54:55.103426  457008 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:54:55.103621  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:55.103746  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.122301  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:55.122561  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:55.122584  457008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:54:55.479695  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:54:55.479723  457008 machine.go:96] duration metric: took 4.468883425s to provisionDockerMachine
	I1025 09:54:55.479736  457008 start.go:293] postStartSetup for "embed-certs-846915" (driver="docker")
	I1025 09:54:55.479750  457008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:54:55.479835  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:54:55.479894  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.498185  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.601303  457008 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:54:55.605265  457008 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:54:55.605300  457008 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:54:55.605314  457008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:54:55.605388  457008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:54:55.605478  457008 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:54:55.605582  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:54:55.614105  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:55.632538  457008 start.go:296] duration metric: took 152.784026ms for postStartSetup
	I1025 09:54:55.632624  457008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:54:55.632678  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.655070  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.753771  457008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:54:55.758537  457008 fix.go:56] duration metric: took 5.07159091s for fixHost
	I1025 09:54:55.758571  457008 start.go:83] releasing machines lock for "embed-certs-846915", held for 5.07165484s
	I1025 09:54:55.758657  457008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-846915
	I1025 09:54:55.776411  457008 ssh_runner.go:195] Run: cat /version.json
	I1025 09:54:55.776457  457008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:54:55.776489  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.776531  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.796671  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.796898  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.952166  457008 ssh_runner.go:195] Run: systemctl --version
	I1025 09:54:55.959161  457008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:54:55.995157  457008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:54:56.000389  457008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:54:56.000452  457008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:54:56.009221  457008 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
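Bridge and podman CNI configs are disabled by renaming them with a .mk_disabled suffix rather than deleting them, which is what the find/mv one-liner above does. A minimal sketch of that rename pass (the directory name is a local placeholder for /etc/cni/net.d, not the actual minikube implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "net.d" // placeholder for /etc/cni/net.d
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		// Skip directories and files already disabled, as the find
		// expression's -not -name *.mk_disabled clause does.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			old := filepath.Join(dir, name)
			if err := os.Rename(old, old+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", old)
		}
	}
}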
	I1025 09:54:56.009247  457008 start.go:495] detecting cgroup driver to use...
	I1025 09:54:56.009282  457008 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:54:56.009336  457008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:54:56.023779  457008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:54:56.037986  457008 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:54:56.038049  457008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:54:56.054727  457008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:54:56.068786  457008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:54:56.162705  457008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:54:56.244217  457008 docker.go:234] disabling docker service ...
	I1025 09:54:56.244284  457008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:54:56.258520  457008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:54:56.271621  457008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:54:56.349740  457008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:54:56.432747  457008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:54:56.444975  457008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:54:56.459162  457008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:54:56.459221  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.468059  457008 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:54:56.468118  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.477045  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.485501  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.493858  457008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:54:56.501638  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.510445  457008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.519270  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.528402  457008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:54:56.536827  457008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:54:56.544264  457008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:56.623484  457008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:54:56.736429  457008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:54:56.736491  457008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:54:56.740613  457008 start.go:563] Will wait 60s for crictl version
	I1025 09:54:56.740677  457008 ssh_runner.go:195] Run: which crictl
	I1025 09:54:56.744278  457008 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:54:56.768009  457008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:54:56.768081  457008 ssh_runner.go:195] Run: crio --version
	I1025 09:54:56.795678  457008 ssh_runner.go:195] Run: crio --version
	I1025 09:54:56.824108  457008 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:54:56.825165  457008 cli_runner.go:164] Run: docker network inspect embed-certs-846915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:54:56.842297  457008 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:54:56.847046  457008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:54:56.857067  457008 kubeadm.go:883] updating cluster {Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:54:56.857171  457008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:56.857214  457008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:56.888963  457008 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:56.888988  457008 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:54:56.889036  457008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:56.915006  457008 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:56.915029  457008 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:54:56.915037  457008 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:54:56.915134  457008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-846915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:54:56.915198  457008 ssh_runner.go:195] Run: crio config
	I1025 09:54:56.960405  457008 cni.go:84] Creating CNI manager for ""
	I1025 09:54:56.960425  457008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:56.960446  457008 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:54:56.960476  457008 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-846915 NodeName:embed-certs-846915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:54:56.960649  457008 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-846915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
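The kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of decoding such a stream and confirming that the kubelet's cgroupDriver matches the "systemd" cgroup_manager written into the CRI-O drop-in earlier (assumes gopkg.in/yaml.v3 and a local kubeadm.yaml holding the config; not minikube code):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // placeholder for /var/tmp/minikube/kubeadm.yaml.new
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.NewDecoder iterates the "---"-separated documents in order.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			panic(err)
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"]) // expect "systemd"
		}
	}
}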
	I1025 09:54:56.960737  457008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:54:56.968913  457008 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:54:56.968987  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:54:56.976772  457008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1025 09:54:56.989175  457008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:54:57.001654  457008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1025 09:54:57.014581  457008 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:54:57.018476  457008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:54:57.028738  457008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:57.108359  457008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:57.134919  457008 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915 for IP: 192.168.103.2
	I1025 09:54:57.134944  457008 certs.go:195] generating shared ca certs ...
	I1025 09:54:57.134965  457008 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.135148  457008 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:54:57.135208  457008 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:54:57.135221  457008 certs.go:257] generating profile certs ...
	I1025 09:54:57.135321  457008 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/client.key
	I1025 09:54:57.135400  457008 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/apiserver.key.b5da4f55
	I1025 09:54:57.135449  457008 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/proxy-client.key
	I1025 09:54:57.135591  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:54:57.135636  457008 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:54:57.135649  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:54:57.135684  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:54:57.135715  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:54:57.135746  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:54:57.135817  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:57.136711  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:54:57.156186  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:54:57.174513  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:54:57.194100  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:54:57.219083  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 09:54:57.237565  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:54:57.254763  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:54:57.272283  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:54:57.289481  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:54:57.306704  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:54:57.323681  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:54:57.341494  457008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:54:57.353846  457008 ssh_runner.go:195] Run: openssl version
	I1025 09:54:57.359964  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:54:57.368508  457008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:57.372486  457008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:57.372540  457008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:57.408024  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:54:57.416387  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:54:57.424628  457008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:54:57.428201  457008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:54:57.428248  457008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:54:57.462175  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:54:57.470726  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:54:57.479469  457008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:54:57.483150  457008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:54:57.483201  457008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:54:57.516984  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:54:57.525156  457008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:54:57.529436  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:54:57.564653  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:54:57.599517  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:54:57.635935  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:54:57.682235  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:54:57.722478  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
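Each openssl "-checkend 86400" call above asks whether a certificate expires within the next 24 hours; a failing check is what triggers regeneration. The same check in a minimal Go sketch using only the standard library (the file name is a local placeholder for the certs under /var/lib/minikube/certs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver-kubelet-client.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of: openssl x509 -noout -checkend 86400
	if remaining := time.Until(cert.NotAfter); remaining < 24*time.Hour {
		fmt.Printf("certificate expires in %s, would be regenerated\n", remaining)
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}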
	I1025 09:54:57.771292  457008 kubeadm.go:400] StartCluster: {Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:57.771403  457008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:54:57.771468  457008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:54:57.809369  457008 cri.go:89] found id: "46c544af25ffafca1d729eb37ffa1959807879d6234f84e37186f47588ac6ec9"
	I1025 09:54:57.809404  457008 cri.go:89] found id: "007e89b7baf40445b09598af39cfba319acdf11728b62f56a4aaf210995d2127"
	I1025 09:54:57.809410  457008 cri.go:89] found id: "48b644dd8de53c8507fceecb6ceae794c15a6e4bfda24197562f2d2226ed7a7a"
	I1025 09:54:57.809414  457008 cri.go:89] found id: "1a49d21a7ef6b31c7d183bb24b6647a09b20b673fd98b5105086202f5e9caed0"
	I1025 09:54:57.809418  457008 cri.go:89] found id: ""
	I1025 09:54:57.809467  457008 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:54:57.823074  457008 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:57Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:57.823150  457008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:54:57.831663  457008 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:54:57.831683  457008 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:54:57.831729  457008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:54:57.839555  457008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:54:57.840254  457008 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-846915" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:57.840583  457008 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-130604/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-846915" cluster setting kubeconfig missing "embed-certs-846915" context setting]
	I1025 09:54:57.841162  457008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.842882  457008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:54:57.850861  457008 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 09:54:57.850898  457008 kubeadm.go:601] duration metric: took 19.208602ms to restartPrimaryControlPlane
	I1025 09:54:57.850908  457008 kubeadm.go:402] duration metric: took 79.623638ms to StartCluster
	I1025 09:54:57.850925  457008 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.850990  457008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:57.852542  457008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.852799  457008 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:54:57.852875  457008 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:54:57.852996  457008 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-846915"
	I1025 09:54:57.853021  457008 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-846915"
	W1025 09:54:57.853035  457008 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:54:57.853054  457008 addons.go:69] Setting dashboard=true in profile "embed-certs-846915"
	I1025 09:54:57.853065  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:57.853079  457008 addons.go:238] Setting addon dashboard=true in "embed-certs-846915"
	I1025 09:54:57.853067  457008 addons.go:69] Setting default-storageclass=true in profile "embed-certs-846915"
	W1025 09:54:57.853093  457008 addons.go:247] addon dashboard should already be in state true
	I1025 09:54:57.853104  457008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-846915"
	I1025 09:54:57.853063  457008 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:54:57.853128  457008 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:54:57.853457  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.853571  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.853627  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.855906  457008 out.go:179] * Verifying Kubernetes components...
	I1025 09:54:57.857196  457008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:57.879929  457008 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:54:57.879948  457008 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:54:57.881026  457008 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:57.881043  457008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:54:57.881074  457008 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 09:54:55.549837  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:57.550264  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:54:57.881097  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:57.881717  457008 addons.go:238] Setting addon default-storageclass=true in "embed-certs-846915"
	W1025 09:54:57.881738  457008 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:54:57.881767  457008 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:54:57.882197  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:54:57.882215  457008 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:54:57.882233  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.882272  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:57.912925  457008 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:57.912955  457008 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:54:57.913022  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:57.914868  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:57.916299  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:57.937956  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:57.998037  457008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:58.013908  457008 node_ready.go:35] waiting up to 6m0s for node "embed-certs-846915" to be "Ready" ...
	I1025 09:54:58.030429  457008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:58.035735  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:54:58.035760  457008 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:54:58.055893  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:54:58.055921  457008 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:54:58.057225  457008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:58.072489  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:54:58.072523  457008 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:54:58.091219  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:54:58.091239  457008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:54:58.108519  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:54:58.108542  457008 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:54:58.122900  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:54:58.122930  457008 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:54:58.135662  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:54:58.135688  457008 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:54:58.148215  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:54:58.148239  457008 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:54:58.160869  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:58.160896  457008 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:54:58.173696  457008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:59.994021  457008 node_ready.go:49] node "embed-certs-846915" is "Ready"
	I1025 09:54:59.994059  457008 node_ready.go:38] duration metric: took 1.980116383s for node "embed-certs-846915" to be "Ready" ...
	I1025 09:54:59.994078  457008 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:54:59.994133  457008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:55:00.524810  457008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.494340014s)
	I1025 09:55:00.524885  457008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.467548938s)
	I1025 09:55:00.525043  457008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.35130278s)
	I1025 09:55:00.525304  457008 api_server.go:72] duration metric: took 2.672474172s to wait for apiserver process to appear ...
	I1025 09:55:00.525323  457008 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:55:00.525339  457008 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:55:00.527109  457008 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-846915 addons enable metrics-server
	
	I1025 09:55:00.530790  457008 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:55:00.530823  457008 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:55:00.541399  457008 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1025 09:54:59.550820  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:55:02.050441  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:55:00.543335  457008 addons.go:514] duration metric: took 2.690467088s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 09:55:01.025434  457008 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:55:01.029928  457008 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:55:01.029957  457008 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:55:01.525569  457008 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:55:01.530405  457008 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:55:01.531317  457008 api_server.go:141] control plane version: v1.34.1
	I1025 09:55:01.531342  457008 api_server.go:131] duration metric: took 1.00601266s to wait for apiserver health ...
	I1025 09:55:01.531364  457008 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:55:01.534517  457008 system_pods.go:59] 8 kube-system pods found
	I1025 09:55:01.534557  457008 system_pods.go:61] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:55:01.534571  457008 system_pods.go:61] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:55:01.534580  457008 system_pods.go:61] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:55:01.534586  457008 system_pods.go:61] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:55:01.534594  457008 system_pods.go:61] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:55:01.534601  457008 system_pods.go:61] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:55:01.534607  457008 system_pods.go:61] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:55:01.534612  457008 system_pods.go:61] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Running
	I1025 09:55:01.534619  457008 system_pods.go:74] duration metric: took 3.248397ms to wait for pod list to return data ...
	I1025 09:55:01.534630  457008 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:55:01.537060  457008 default_sa.go:45] found service account: "default"
	I1025 09:55:01.537080  457008 default_sa.go:55] duration metric: took 2.439904ms for default service account to be created ...
	I1025 09:55:01.537090  457008 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:55:01.539504  457008 system_pods.go:86] 8 kube-system pods found
	I1025 09:55:01.539542  457008 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:55:01.539555  457008 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:55:01.539567  457008 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:55:01.539579  457008 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:55:01.539592  457008 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:55:01.539604  457008 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:55:01.539623  457008 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:55:01.539632  457008 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Running
	I1025 09:55:01.539642  457008 system_pods.go:126] duration metric: took 2.545561ms to wait for k8s-apps to be running ...
	I1025 09:55:01.539655  457008 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:55:01.539709  457008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:55:01.553256  457008 system_svc.go:56] duration metric: took 13.59133ms WaitForService to wait for kubelet
	I1025 09:55:01.553280  457008 kubeadm.go:586] duration metric: took 3.700453295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:55:01.553307  457008 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:55:01.556207  457008 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:55:01.556239  457008 node_conditions.go:123] node cpu capacity is 8
	I1025 09:55:01.556252  457008 node_conditions.go:105] duration metric: took 2.940915ms to run NodePressure ...
	I1025 09:55:01.556266  457008 start.go:241] waiting for startup goroutines ...
	I1025 09:55:01.556272  457008 start.go:246] waiting for cluster config update ...
	I1025 09:55:01.556281  457008 start.go:255] writing updated cluster config ...
	I1025 09:55:01.556546  457008 ssh_runner.go:195] Run: rm -f paused
	I1025 09:55:01.560261  457008 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:55:01.563470  457008 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:55:03.568631  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:04.550637  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:55:07.049223  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:55:08.549788  449952 pod_ready.go:94] pod "coredns-66bc5c9577-29ltg" is "Ready"
	I1025 09:55:08.549821  449952 pod_ready.go:86] duration metric: took 38.005597851s for pod "coredns-66bc5c9577-29ltg" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.552948  449952 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.557263  449952 pod_ready.go:94] pod "etcd-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:08.557290  449952 pod_ready.go:86] duration metric: took 4.316609ms for pod "etcd-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.559329  449952 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.562970  449952 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:08.562995  449952 pod_ready.go:86] duration metric: took 3.629414ms for pod "kube-apiserver-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.564977  449952 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.748757  449952 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:08.748792  449952 pod_ready.go:86] duration metric: took 183.792651ms for pod "kube-controller-manager-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.948726  449952 pod_ready.go:83] waiting for pod "kube-proxy-bg94v" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.347710  449952 pod_ready.go:94] pod "kube-proxy-bg94v" is "Ready"
	I1025 09:55:09.347744  449952 pod_ready.go:86] duration metric: took 398.987622ms for pod "kube-proxy-bg94v" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.548542  449952 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.947051  449952 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:09.947079  449952 pod_ready.go:86] duration metric: took 398.50407ms for pod "kube-scheduler-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.947091  449952 pod_ready.go:40] duration metric: took 39.406100171s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:55:09.990440  449952 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:55:10.024224  449952 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-880773" cluster and "default" namespace by default
	W1025 09:55:05.569905  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:07.571127  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:10.069750  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:12.569719  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:15.068937  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:17.569445  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:20.069705  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:22.069926  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:24.569244  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:27.070772  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:29.569630  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:32.069368  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:34.069476  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	I1025 09:55:36.068830  457008 pod_ready.go:94] pod "coredns-66bc5c9577-4w68k" is "Ready"
	I1025 09:55:36.068861  457008 pod_ready.go:86] duration metric: took 34.505369576s for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.071425  457008 pod_ready.go:83] waiting for pod "etcd-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.075090  457008 pod_ready.go:94] pod "etcd-embed-certs-846915" is "Ready"
	I1025 09:55:36.075112  457008 pod_ready.go:86] duration metric: took 3.662871ms for pod "etcd-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.076946  457008 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.080447  457008 pod_ready.go:94] pod "kube-apiserver-embed-certs-846915" is "Ready"
	I1025 09:55:36.080468  457008 pod_ready.go:86] duration metric: took 3.502968ms for pod "kube-apiserver-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.082221  457008 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.267090  457008 pod_ready.go:94] pod "kube-controller-manager-embed-certs-846915" is "Ready"
	I1025 09:55:36.267117  457008 pod_ready.go:86] duration metric: took 184.877501ms for pod "kube-controller-manager-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.467383  457008 pod_ready.go:83] waiting for pod "kube-proxy-kfqqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.866485  457008 pod_ready.go:94] pod "kube-proxy-kfqqh" is "Ready"
	I1025 09:55:36.866512  457008 pod_ready.go:86] duration metric: took 399.107467ms for pod "kube-proxy-kfqqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:37.066668  457008 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:37.467508  457008 pod_ready.go:94] pod "kube-scheduler-embed-certs-846915" is "Ready"
	I1025 09:55:37.467545  457008 pod_ready.go:86] duration metric: took 400.847423ms for pod "kube-scheduler-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:37.467561  457008 pod_ready.go:40] duration metric: took 35.907271983s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:55:37.511553  457008 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:55:37.513178  457008 out.go:179] * Done! kubectl is now configured to use "embed-certs-846915" cluster and "default" namespace by default
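The 500s above all trace to the single failing check `[-]poststarthook/rbac/bootstrap-roles failed: reason withheld`, which is expected while a freshly restarted apiserver reconciles its bootstrap RBAC objects; the probe flips to 200 at 09:55:01. A minimal sketch for reproducing the per-check listing against this profile (context name assumed to match the profile, as minikube creates by default):

	kubectl --context embed-certs-846915 get --raw='/healthz?verbose'
	# or directly against the endpoint the log polls (anonymous access to /healthz
	# is granted by the default system:public-info-viewer binding):
	curl -k 'https://192.168.103.2:8443/healthz?verbose'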
	
	
	==> CRI-O <==
	Oct 25 09:55:21 embed-certs-846915 crio[566]: time="2025-10-25T09:55:21.27343234Z" level=info msg="Started container" PID=1768 containerID=f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t/dashboard-metrics-scraper id=0d9a582a-b6c1-4d88-ba43-e4f8781b9036 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4884ab66bff2891bd7a594571c07dc50314033c8d6fa932ddbef76ce70fe60f0
	Oct 25 09:55:21 embed-certs-846915 crio[566]: time="2025-10-25T09:55:21.319241978Z" level=info msg="Removing container: a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73" id=95529f1f-ad93-4820-aeca-eae168d26d63 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:55:21 embed-certs-846915 crio[566]: time="2025-10-25T09:55:21.329185176Z" level=info msg="Removed container a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t/dashboard-metrics-scraper" id=95529f1f-ad93-4820-aeca-eae168d26d63 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.347078929Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=dacf1f27-8d95-4135-916a-14d7493463d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.347969592Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4504680e-5feb-4898-98eb-2cdea775c750 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.348978457Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d1a6e45c-090a-4dbf-afeb-5e2bd7258b53 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.349103668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.353596637Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.353780829Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/eeed661d3eeec36d45f7589b0ab1d22e082c62bc438818c56f79c7d8a893942c/merged/etc/passwd: no such file or directory"
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.353816701Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/eeed661d3eeec36d45f7589b0ab1d22e082c62bc438818c56f79c7d8a893942c/merged/etc/group: no such file or directory"
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.354118687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.383507663Z" level=info msg="Created container 8fcb04b4201b14c458c49011837dbe7ebc093eadb439b95e7805d450e64ed33c: kube-system/storage-provisioner/storage-provisioner" id=d1a6e45c-090a-4dbf-afeb-5e2bd7258b53 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.384119435Z" level=info msg="Starting container: 8fcb04b4201b14c458c49011837dbe7ebc093eadb439b95e7805d450e64ed33c" id=cf8d71ff-d5b4-44b3-b0a8-b0e4eb19460c name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.386314249Z" level=info msg="Started container" PID=1782 containerID=8fcb04b4201b14c458c49011837dbe7ebc093eadb439b95e7805d450e64ed33c description=kube-system/storage-provisioner/storage-provisioner id=cf8d71ff-d5b4-44b3-b0a8-b0e4eb19460c name=/runtime.v1.RuntimeService/StartContainer sandboxID=616e6939f36526a30c945ce11bfec4a6934fb7d658c57daa00c9a10c8b588ecd
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.225388727Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fad7a670-9d0b-4831-be99-4509cd6293e2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.226390091Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5e5ee345-9175-4e3f-9bf3-13bd7e639bf2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.227567882Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t/dashboard-metrics-scraper" id=8b548ad4-3dcf-453a-a730-c9521bcaa623 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.227711752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.232940082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.233403039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.257034737Z" level=info msg="Created container 377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t/dashboard-metrics-scraper" id=8b548ad4-3dcf-453a-a730-c9521bcaa623 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.257688045Z" level=info msg="Starting container: 377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12" id=42b0961c-e994-4edc-86e4-ab2dbb649ee2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.259545307Z" level=info msg="Started container" PID=1819 containerID=377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t/dashboard-metrics-scraper id=42b0961c-e994-4edc-86e4-ab2dbb649ee2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4884ab66bff2891bd7a594571c07dc50314033c8d6fa932ddbef76ce70fe60f0
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.382737064Z" level=info msg="Removing container: f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8" id=1b81d7aa-593a-4de4-ae58-e31dcd7fbb27 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.39253446Z" level=info msg="Removed container f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t/dashboard-metrics-scraper" id=1b81d7aa-593a-4de4-ae58-e31dcd7fbb27 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	377cbf4f2e049       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   4884ab66bff28       dashboard-metrics-scraper-6ffb444bf9-2np5t   kubernetes-dashboard
	8fcb04b4201b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   616e6939f3652       storage-provisioner                          kube-system
	586ed27083f19       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   9bacf87816d97       kubernetes-dashboard-855c9754f9-ml7nd        kubernetes-dashboard
	e5372a56b35a9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   31473c4357758       busybox                                      default
	666a8cee87b70       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   07dbcb23573c5       coredns-66bc5c9577-4w68k                     kube-system
	32ca438e08c05       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   459b19b5f05d7       kube-proxy-kfqqh                             kube-system
	0963c187a474d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   616e6939f3652       storage-provisioner                          kube-system
	7f397e67e1866       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   23ae3cd2dccc2       kindnet-khx5l                                kube-system
	46c544af25ffa       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   a4791f7a0be9d       kube-scheduler-embed-certs-846915            kube-system
	007e89b7baf40       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   57837daaa2fa4       kube-apiserver-embed-certs-846915            kube-system
	48b644dd8de53       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   53c09a27459b9       kube-controller-manager-embed-certs-846915   kube-system
	1a49d21a7ef6b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   6a30010f788e6       etcd-embed-certs-846915                      kube-system
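The table above is the CRI view of the node; the same data can be pulled straight from CRI-O on the node, e.g. for the exited dashboard-metrics-scraper container (container ID taken from the table; `minikube ssh` access to the profile assumed):

	minikube -p embed-certs-846915 ssh -- sudo crictl ps -a
	minikube -p embed-certs-846915 ssh -- sudo crictl logs 377cbf4f2e049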
	
	
	==> coredns [666a8cee87b7020a849d0d0ed2e5ed7ac45f562ec0698b1bdac93a0834c88d97] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45916 - 15853 "HINFO IN 4879031163451701237.5902946850722960915. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02213859s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
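The `plugin/ready: Still waiting on: "kubernetes"` lines and the i/o timeouts to 10.96.0.1:443 explain the ~35s the coredns pod spent not "Ready" in the run above: the `ready` plugin (serving on :8181) reports unready until the kubernetes plugin has synced its API watches. A quick way to watch this from outside the pod:

	kubectl --context embed-certs-846915 -n kube-system get pods -l k8s-app=kube-dns -w
	kubectl --context embed-certs-846915 -n kube-system logs -l k8s-app=kube-dns --tail=20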
	
	
	==> describe nodes <==
	Name:               embed-certs-846915
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-846915
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=embed-certs-846915
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_54_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:53:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-846915
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:55:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:55:40 +0000   Sat, 25 Oct 2025 09:53:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:55:40 +0000   Sat, 25 Oct 2025 09:53:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:55:40 +0000   Sat, 25 Oct 2025 09:53:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:55:40 +0000   Sat, 25 Oct 2025 09:55:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-846915
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                7759893b-5ad2-4235-8596-bf7be856684a
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-4w68k                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-846915                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-khx5l                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-846915             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-846915    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-kfqqh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-846915             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2np5t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ml7nd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node embed-certs-846915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node embed-certs-846915 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node embed-certs-846915 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node embed-certs-846915 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node embed-certs-846915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node embed-certs-846915 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node embed-certs-846915 event: Registered Node embed-certs-846915 in Controller
	  Normal  NodeReady                95s                  kubelet          Node embed-certs-846915 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node embed-certs-846915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node embed-certs-846915 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node embed-certs-846915 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                  node-controller  Node embed-certs-846915 event: Registered Node embed-certs-846915 in Controller
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
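The repeated `martian source` lines are the kernel flagging packets whose source address it considers impossible on eth0 (pod-CIDR traffic crossing the bridge), not a test failure by themselves. Logging of these is governed by a sysctl; a sketch for checking or silencing it on the node (cosmetic only, assuming the noise is unwanted):

	minikube -p embed-certs-846915 ssh -- sysctl net.ipv4.conf.all.log_martians
	minikube -p embed-certs-846915 ssh -- sudo sysctl -w net.ipv4.conf.all.log_martians=0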
	
	
	==> etcd [1a49d21a7ef6b31c7d183bb24b6647a09b20b673fd98b5105086202f5e9caed0] <==
	{"level":"warn","ts":"2025-10-25T09:54:59.367593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.373618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.381094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.392056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.399742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.406843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.413195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.419525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.426732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.432962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.459538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.465821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.472333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.484760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.492021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.498321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.504935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.511142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.517183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.523591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.540589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.543995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.551567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.557454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.609140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48806","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:55:52 up  1:38,  0 user,  load average: 2.54, 3.98, 2.74
	Linux embed-certs-846915 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7f397e67e1866c16c1c0722221598e3f82eb5387d3ab8b306224b816096ebca1] <==
	I1025 09:55:00.698160       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:55:00.698423       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 09:55:00.698588       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:55:00.698600       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:55:00.698620       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:55:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:55:00.993328       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:55:00.993699       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:55:00.993725       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:55:00.993850       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:55:01.393340       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:55:01.393581       1 metrics.go:72] Registering metrics
	I1025 09:55:01.393673       1 controller.go:711] "Syncing nftables rules"
	I1025 09:55:10.901557       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:55:10.901621       1 main.go:301] handling current node
	I1025 09:55:20.906169       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:55:20.906229       1 main.go:301] handling current node
	I1025 09:55:30.902020       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:55:30.902078       1 main.go:301] handling current node
	I1025 09:55:40.901603       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:55:40.901669       1 main.go:301] handling current node
	I1025 09:55:50.902208       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:55:50.902244       1 main.go:301] handling current node
	
	
	==> kube-apiserver [007e89b7baf40445b09598af39cfba319acdf11728b62f56a4aaf210995d2127] <==
	I1025 09:55:00.063636       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:55:00.063655       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:55:00.063719       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:55:00.063582       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:55:00.063947       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:55:00.064142       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:55:00.064291       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:55:00.064341       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:55:00.064394       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:55:00.064406       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:55:00.066287       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:55:00.071446       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:55:00.088409       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:55:00.092781       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:55:00.306271       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:55:00.351947       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:55:00.370690       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:55:00.378706       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:55:00.385225       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:55:00.424982       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.179.19"}
	I1025 09:55:00.437945       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.166.163"}
	I1025 09:55:00.967122       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:55:03.840970       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:55:03.889823       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:55:03.988637       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [48b644dd8de53c8507fceecb6ceae794c15a6e4bfda24197562f2d2226ed7a7a] <==
	I1025 09:55:03.338891       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:55:03.341106       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:55:03.344273       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:55:03.345422       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:55:03.347633       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:55:03.349814       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:55:03.385592       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:55:03.386761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:55:03.386772       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:55:03.386796       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:55:03.386832       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:55:03.386844       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:55:03.386860       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:55:03.386925       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:55:03.386947       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:55:03.386953       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:55:03.386974       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:55:03.387364       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:55:03.387485       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:55:03.387633       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:55:03.392846       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:55:03.394084       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:55:03.404251       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:55:03.406500       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:55:03.410798       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [32ca438e08c054b3e50b3233e1b81fce33c79d0787be9c3e7e3baab4e4734697] <==
	I1025 09:55:00.615636       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:55:00.683233       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:55:00.783460       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:55:00.783544       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 09:55:00.783657       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:55:00.802722       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:55:00.802790       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:55:00.808187       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:55:00.808614       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:55:00.808648       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:55:00.809940       1 config.go:200] "Starting service config controller"
	I1025 09:55:00.809966       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:55:00.809994       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:55:00.810002       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:55:00.810083       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:55:00.810110       1 config.go:309] "Starting node config controller"
	I1025 09:55:00.810123       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:55:00.810642       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:55:00.810111       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:55:00.910262       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:55:00.910292       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:55:00.911657       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
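The single E-level line above is a configuration hint, not an error: with `nodePortAddresses` unset, NodePorts bind on every local IP. The message itself quotes the fix (`--nodeport-addresses primary`, supported since Kubernetes 1.29); in a kubeadm-managed cluster such as this one, the equivalent knob lives in the kube-proxy ConfigMap, which can be inspected like so (sketch, assuming the kubeadm-created ConfigMap name):

	kubectl --context embed-certs-846915 -n kube-system get configmap kube-proxy -o yaml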
	
	
	==> kube-scheduler [46c544af25ffafca1d729eb37ffa1959807879d6234f84e37186f47588ac6ec9] <==
	I1025 09:54:59.034665       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:54:59.986513       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:54:59.986551       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:54:59.986569       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:54:59.986578       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:55:00.018625       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:55:00.018657       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:55:00.022036       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:55:00.022172       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:55:00.025403       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:55:00.022195       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:55:00.125859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:55:08 embed-certs-846915 kubelet[729]: I1025 09:55:08.277983     729 scope.go:117] "RemoveContainer" containerID="a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73"
	Oct 25 09:55:08 embed-certs-846915 kubelet[729]: E1025 09:55:08.278156     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:09 embed-certs-846915 kubelet[729]: I1025 09:55:09.283562     729 scope.go:117] "RemoveContainer" containerID="a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73"
	Oct 25 09:55:09 embed-certs-846915 kubelet[729]: E1025 09:55:09.283797     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:10 embed-certs-846915 kubelet[729]: I1025 09:55:10.288150     729 scope.go:117] "RemoveContainer" containerID="a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73"
	Oct 25 09:55:10 embed-certs-846915 kubelet[729]: E1025 09:55:10.288333     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:10 embed-certs-846915 kubelet[729]: I1025 09:55:10.300038     729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ml7nd" podStartSLOduration=1.4659565749999999 podStartE2EDuration="7.300016843s" podCreationTimestamp="2025-10-25 09:55:03 +0000 UTC" firstStartedPulling="2025-10-25 09:55:04.290571174 +0000 UTC m=+7.153697662" lastFinishedPulling="2025-10-25 09:55:10.124631442 +0000 UTC m=+12.987757930" observedRunningTime="2025-10-25 09:55:10.30000701 +0000 UTC m=+13.163133516" watchObservedRunningTime="2025-10-25 09:55:10.300016843 +0000 UTC m=+13.163143350"
	Oct 25 09:55:21 embed-certs-846915 kubelet[729]: I1025 09:55:21.224787     729 scope.go:117] "RemoveContainer" containerID="a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73"
	Oct 25 09:55:21 embed-certs-846915 kubelet[729]: I1025 09:55:21.317998     729 scope.go:117] "RemoveContainer" containerID="a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73"
	Oct 25 09:55:21 embed-certs-846915 kubelet[729]: I1025 09:55:21.318262     729 scope.go:117] "RemoveContainer" containerID="f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8"
	Oct 25 09:55:21 embed-certs-846915 kubelet[729]: E1025 09:55:21.318509     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:29 embed-certs-846915 kubelet[729]: I1025 09:55:29.100785     729 scope.go:117] "RemoveContainer" containerID="f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8"
	Oct 25 09:55:29 embed-certs-846915 kubelet[729]: E1025 09:55:29.101056     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:31 embed-certs-846915 kubelet[729]: I1025 09:55:31.346692     729 scope.go:117] "RemoveContainer" containerID="0963c187a474d790c72b9c8390401140ff56882dd70e39b8e23c8ca7acaafd5c"
	Oct 25 09:55:43 embed-certs-846915 kubelet[729]: I1025 09:55:43.224856     729 scope.go:117] "RemoveContainer" containerID="f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8"
	Oct 25 09:55:43 embed-certs-846915 kubelet[729]: I1025 09:55:43.381313     729 scope.go:117] "RemoveContainer" containerID="f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8"
	Oct 25 09:55:43 embed-certs-846915 kubelet[729]: I1025 09:55:43.381670     729 scope.go:117] "RemoveContainer" containerID="377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12"
	Oct 25 09:55:43 embed-certs-846915 kubelet[729]: E1025 09:55:43.381896     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:49 embed-certs-846915 kubelet[729]: I1025 09:55:49.100523     729 scope.go:117] "RemoveContainer" containerID="377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12"
	Oct 25 09:55:49 embed-certs-846915 kubelet[729]: E1025 09:55:49.101217     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:49 embed-certs-846915 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:55:49 embed-certs-846915 kubelet[729]: I1025 09:55:49.554167     729 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 09:55:49 embed-certs-846915 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:55:49 embed-certs-846915 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:55:49 embed-certs-846915 systemd[1]: kubelet.service: Consumed 1.727s CPU time.
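Note: the kubelet entries above show the standard CrashLoopBackOff doubling (back-off 10s, then 20s, then 40s) for dashboard-metrics-scraper. A minimal way to inspect why the container keeps exiting, using the pod name from the log:

	kubectl --context embed-certs-846915 -n kubernetes-dashboard logs \
	  dashboard-metrics-scraper-6ffb444bf9-2np5t --previous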
	
	
	==> kubernetes-dashboard [586ed27083f1918f7a0180e22ca12263e87a4c0552578e80d52efc7dab81d226] <==
	2025/10/25 09:55:10 Using namespace: kubernetes-dashboard
	2025/10/25 09:55:10 Using in-cluster config to connect to apiserver
	2025/10/25 09:55:10 Using secret token for csrf signing
	2025/10/25 09:55:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:55:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:55:10 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:55:10 Generating JWE encryption key
	2025/10/25 09:55:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:55:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:55:10 Initializing JWE encryption key from synchronized object
	2025/10/25 09:55:10 Creating in-cluster Sidecar client
	2025/10/25 09:55:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:55:10 Serving insecurely on HTTP port: 9090
	2025/10/25 09:55:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:55:10 Starting overwatch
	
	
	==> storage-provisioner [0963c187a474d790c72b9c8390401140ff56882dd70e39b8e23c8ca7acaafd5c] <==
	I1025 09:55:00.582726       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:55:30.586767       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
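Note: this first provisioner instance dies because it cannot reach the in-cluster apiserver VIP (10.96.0.1:443) within its timeout; the replacement instance below comes up once networking settles. A minimal reachability probe from inside the node, assuming curl is present in the kicbase image:

	out/minikube-linux-amd64 -p embed-certs-846915 ssh -- \
	  curl -sk --max-time 5 https://10.96.0.1/version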
	
	
	==> storage-provisioner [8fcb04b4201b14c458c49011837dbe7ebc093eadb439b95e7805d450e64ed33c] <==
	I1025 09:55:31.398313       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:55:31.404951       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:55:31.405372       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:55:31.407781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:34.863173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:39.123807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:42.722862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:45.776684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:48.799132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:48.803507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:55:48.803649       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:55:48.803817       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-846915_d7ebf65b-2913-4eb0-b547-b97a9481455a!
	I1025 09:55:48.803792       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"29bb7dfc-96d0-4f89-994b-0b96c89c26b8", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-846915_d7ebf65b-2913-4eb0-b547-b97a9481455a became leader
	W1025 09:55:48.806082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:48.809834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:55:48.904025       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-846915_d7ebf65b-2913-4eb0-b547-b97a9481455a!
	W1025 09:55:50.812531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:50.816461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:52.819943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:52.824824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
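Note: the repeated deprecation warnings come from the provisioner's leader election, which still keeps its lock on a v1 Endpoints object. A minimal look at the lock acquired above, assuming the conventional client-go leader-election annotation:

	kubectl --context embed-certs-846915 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'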
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-846915 -n embed-certs-846915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-846915 -n embed-certs-846915: exit status 2 (321.88666ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-846915 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-846915
helpers_test.go:243: (dbg) docker inspect embed-certs-846915:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43",
	        "Created": "2025-10-25T09:53:45.12554821Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 457304,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:54:50.734949201Z",
	            "FinishedAt": "2025-10-25T09:54:49.856132584Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43/hostname",
	        "HostsPath": "/var/lib/docker/containers/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43/hosts",
	        "LogPath": "/var/lib/docker/containers/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43/95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43-json.log",
	        "Name": "/embed-certs-846915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-846915:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-846915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "95005cf1fe64ca2bacae86cf473a3ad2e6a348523a74d4f1c735ad3902166b43",
	                "LowerDir": "/var/lib/docker/overlay2/a7f2291046bf28c8b06385afeace8def42aba64bb2a48f5f68cdc889aa5b8f12-init/diff:/var/lib/docker/overlay2/539f779e972eb00c50866302b4d587edb33bfe968de070ac9b6030244b291532/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a7f2291046bf28c8b06385afeace8def42aba64bb2a48f5f68cdc889aa5b8f12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a7f2291046bf28c8b06385afeace8def42aba64bb2a48f5f68cdc889aa5b8f12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a7f2291046bf28c8b06385afeace8def42aba64bb2a48f5f68cdc889aa5b8f12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-846915",
	                "Source": "/var/lib/docker/volumes/embed-certs-846915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-846915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-846915",
	                "name.minikube.sigs.k8s.io": "embed-certs-846915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "407cfe59ad12d477c3277c0cb272518e9bac16cef1cbaababf185e3f3db61f5f",
	            "SandboxKey": "/var/run/docker/netns/407cfe59ad12",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33257"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-846915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:1a:0c:72:6c:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "727501f067496090ecad65be87558162f256d8c8235dc960e3b62d2c325f512b",
	                    "EndpointID": "62c29b63c8f4c580541d0d27484e00500aee523f0a95fe973a0299951de8e489",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-846915",
	                        "95005cf1fe64"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
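Note: the NetworkSettings above are what minikube itself reads when it needs the host-mapped SSH port; the same Go template appears verbatim in the provisioning log further down. A stand-alone sketch of that lookup:

	docker container inspect embed-certs-846915 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'

Against the inspect output above this prints 33255.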
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-846915 -n embed-certs-846915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-846915 -n embed-certs-846915: exit status 2 (328.497139ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-846915 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-846915 logs -n 25: (1.070084606s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-880773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-656799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:53 UTC │
	│ start   │ -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:53 UTC │ 25 Oct 25 09:54 UTC │
	│ stop    │ -p default-k8s-diff-port-880773 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-880773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-846915 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ stop    │ -p embed-certs-846915 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ no-preload-656799 image list --format=json                                                                                                                                                                                                    │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p no-preload-656799 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ delete  │ -p no-preload-656799                                                                                                                                                                                                                          │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ delete  │ -p no-preload-656799                                                                                                                                                                                                                          │ no-preload-656799            │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ old-k8s-version-676314 image list --format=json                                                                                                                                                                                               │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ pause   │ -p old-k8s-version-676314 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-846915 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ start   │ -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:55 UTC │
	│ delete  │ -p old-k8s-version-676314                                                                                                                                                                                                                     │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ delete  │ -p old-k8s-version-676314                                                                                                                                                                                                                     │ old-k8s-version-676314       │ jenkins │ v1.37.0 │ 25 Oct 25 09:54 UTC │ 25 Oct 25 09:54 UTC │
	│ image   │ default-k8s-diff-port-880773 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ pause   │ -p default-k8s-diff-port-880773 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-880773                                                                                                                                                                                                               │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ delete  │ -p default-k8s-diff-port-880773                                                                                                                                                                                                               │ default-k8s-diff-port-880773 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ image   │ embed-certs-846915 image list --format=json                                                                                                                                                                                                   │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ pause   │ -p embed-certs-846915 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-846915           │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:54:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:54:50.490480  457008 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:54:50.490778  457008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:50.490791  457008 out.go:374] Setting ErrFile to fd 2...
	I1025 09:54:50.490795  457008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:54:50.491023  457008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:54:50.491458  457008 out.go:368] Setting JSON to false
	I1025 09:54:50.492784  457008 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5834,"bootTime":1761380256,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:54:50.492874  457008 start.go:141] virtualization: kvm guest
	I1025 09:54:50.494727  457008 out.go:179] * [embed-certs-846915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:54:50.495938  457008 notify.go:220] Checking for updates...
	I1025 09:54:50.495955  457008 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:54:50.497200  457008 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:54:50.498359  457008 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:50.499624  457008 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:54:50.500821  457008 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:54:50.501999  457008 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:54:50.503677  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:50.504213  457008 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:54:50.529014  457008 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:54:50.529154  457008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:50.591445  457008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:54:50.580621433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:50.591560  457008 docker.go:318] overlay module found
	I1025 09:54:50.592851  457008 out.go:179] * Using the docker driver based on existing profile
	I1025 09:54:50.593988  457008 start.go:305] selected driver: docker
	I1025 09:54:50.594007  457008 start.go:925] validating driver "docker" against &{Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:50.594132  457008 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:54:50.594767  457008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:54:50.658713  457008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:54:50.645802852 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:54:50.659072  457008 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:54:50.659108  457008 cni.go:84] Creating CNI manager for ""
	I1025 09:54:50.659179  457008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:50.659237  457008 start.go:349] cluster config:
	{Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:50.660979  457008 out.go:179] * Starting "embed-certs-846915" primary control-plane node in "embed-certs-846915" cluster
	I1025 09:54:50.662225  457008 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:54:50.663491  457008 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:54:50.664700  457008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:50.664762  457008 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:54:50.664778  457008 cache.go:58] Caching tarball of preloaded images
	I1025 09:54:50.664819  457008 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:54:50.664906  457008 preload.go:233] Found /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:54:50.664923  457008 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:54:50.665060  457008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/config.json ...
	I1025 09:54:50.686709  457008 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:54:50.686734  457008 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:54:50.686758  457008 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:54:50.686788  457008 start.go:360] acquireMachinesLock for embed-certs-846915: {Name:mk6afaad62774c341d106d1a8d37743a274e5cb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:54:50.686902  457008 start.go:364] duration metric: took 69.005µs to acquireMachinesLock for "embed-certs-846915"
	I1025 09:54:50.686926  457008 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:54:50.686937  457008 fix.go:54] fixHost starting: 
	I1025 09:54:50.687222  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:50.706726  457008 fix.go:112] recreateIfNeeded on embed-certs-846915: state=Stopped err=<nil>
	W1025 09:54:50.706755  457008 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:54:50.550561  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:53.049954  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:54:50.708166  457008 out.go:252] * Restarting existing docker container for "embed-certs-846915" ...
	I1025 09:54:50.708247  457008 cli_runner.go:164] Run: docker start embed-certs-846915
	I1025 09:54:50.967025  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:50.987855  457008 kic.go:430] container "embed-certs-846915" state is running.
	I1025 09:54:50.988396  457008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-846915
	I1025 09:54:51.010564  457008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/config.json ...
	I1025 09:54:51.010825  457008 machine.go:93] provisionDockerMachine start ...
	I1025 09:54:51.010912  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:51.030680  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:51.031028  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:51.031045  457008 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:54:51.031643  457008 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57398->127.0.0.1:33255: read: connection reset by peer
	I1025 09:54:54.174504  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-846915
	
	I1025 09:54:54.174532  457008 ubuntu.go:182] provisioning hostname "embed-certs-846915"
	I1025 09:54:54.174596  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:54.193572  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:54.193807  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:54.193820  457008 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-846915 && echo "embed-certs-846915" | sudo tee /etc/hostname
	I1025 09:54:54.343404  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-846915
	
	I1025 09:54:54.343512  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:54.361545  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:54.361766  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:54.361784  457008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-846915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-846915/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-846915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:54:54.501002  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:54:54.501029  457008 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-130604/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-130604/.minikube}
	I1025 09:54:54.501072  457008 ubuntu.go:190] setting up certificates
	I1025 09:54:54.501087  457008 provision.go:84] configureAuth start
	I1025 09:54:54.501144  457008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-846915
	I1025 09:54:54.519513  457008 provision.go:143] copyHostCerts
	I1025 09:54:54.519592  457008 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem, removing ...
	I1025 09:54:54.519607  457008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem
	I1025 09:54:54.519682  457008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/ca.pem (1078 bytes)
	I1025 09:54:54.519809  457008 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem, removing ...
	I1025 09:54:54.519821  457008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem
	I1025 09:54:54.519850  457008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/cert.pem (1123 bytes)
	I1025 09:54:54.519924  457008 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem, removing ...
	I1025 09:54:54.519931  457008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem
	I1025 09:54:54.519959  457008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-130604/.minikube/key.pem (1675 bytes)
	I1025 09:54:54.520024  457008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem org=jenkins.embed-certs-846915 san=[127.0.0.1 192.168.103.2 embed-certs-846915 localhost minikube]
	I1025 09:54:54.903702  457008 provision.go:177] copyRemoteCerts
	I1025 09:54:54.903771  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:54:54.903818  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:54.921801  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.047195  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:54:55.066909  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:54:55.085856  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:54:55.103394  457008 provision.go:87] duration metric: took 602.287274ms to configureAuth
	I1025 09:54:55.103426  457008 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:54:55.103621  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:55.103746  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.122301  457008 main.go:141] libmachine: Using SSH client type: native
	I1025 09:54:55.122561  457008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33255 <nil> <nil>}
	I1025 09:54:55.122584  457008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:54:55.479695  457008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:54:55.479723  457008 machine.go:96] duration metric: took 4.468883425s to provisionDockerMachine
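Note: the CRIO_MINIKUBE_OPTIONS value written above lands in /etc/sysconfig/crio.minikube, which the kicbase crio systemd unit presumably sources via an EnvironmentFile= directive (an assumption; the unit file itself is not shown in this log). Either way, it is the systemctl restart that makes the --insecure-registry flag take effect.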
	I1025 09:54:55.479736  457008 start.go:293] postStartSetup for "embed-certs-846915" (driver="docker")
	I1025 09:54:55.479750  457008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:54:55.479835  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:54:55.479894  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.498185  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.601303  457008 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:54:55.605265  457008 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:54:55.605300  457008 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:54:55.605314  457008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/addons for local assets ...
	I1025 09:54:55.605388  457008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-130604/.minikube/files for local assets ...
	I1025 09:54:55.605478  457008 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem -> 1341452.pem in /etc/ssl/certs
	I1025 09:54:55.605582  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:54:55.614105  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:55.632538  457008 start.go:296] duration metric: took 152.784026ms for postStartSetup
	I1025 09:54:55.632624  457008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:54:55.632678  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.655070  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.753771  457008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:54:55.758537  457008 fix.go:56] duration metric: took 5.07159091s for fixHost
	I1025 09:54:55.758571  457008 start.go:83] releasing machines lock for "embed-certs-846915", held for 5.07165484s
	I1025 09:54:55.758657  457008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-846915
	I1025 09:54:55.776411  457008 ssh_runner.go:195] Run: cat /version.json
	I1025 09:54:55.776457  457008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:54:55.776489  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.776531  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:55.796671  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.796898  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:55.952166  457008 ssh_runner.go:195] Run: systemctl --version
	I1025 09:54:55.959161  457008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:54:55.995157  457008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:54:56.000389  457008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:54:56.000452  457008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
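As logged, the find expression has lost its shell quoting (the globs and parentheses would be expanded before find sees them). A copy-pasteable equivalent, with the filename passed positionally to sh -c instead of substituting {} into the command string, would be roughly:

	# disable any bridge/podman CNI configs by renaming them aside
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;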
	I1025 09:54:56.009221  457008 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:54:56.009247  457008 start.go:495] detecting cgroup driver to use...
	I1025 09:54:56.009282  457008 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:54:56.009336  457008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:54:56.023779  457008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:54:56.037986  457008 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:54:56.038049  457008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:54:56.054727  457008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:54:56.068786  457008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:54:56.162705  457008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:54:56.244217  457008 docker.go:234] disabling docker service ...
	I1025 09:54:56.244284  457008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:54:56.258520  457008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:54:56.271621  457008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:54:56.349740  457008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:54:56.432747  457008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:54:56.444975  457008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:54:56.459162  457008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:54:56.459221  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.468059  457008 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1025 09:54:56.468118  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.477045  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.485501  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.493858  457008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:54:56.501638  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.510445  457008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.519270  457008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:54:56.528402  457008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:54:56.536827  457008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:54:56.544264  457008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:56.623484  457008 ssh_runner.go:195] Run: sudo systemctl restart crio
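Collapsed into one place, the CRI-O reconfiguration performed by the sed calls above amounts to the following (a consolidated sketch of the same edits):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                        # drop any stale value
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF" # re-add after cgroup_manager
	sudo grep -q '^ *default_sysctls' "$CONF" || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio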
	I1025 09:54:56.736429  457008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:54:56.736491  457008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:54:56.740613  457008 start.go:563] Will wait 60s for crictl version
	I1025 09:54:56.740677  457008 ssh_runner.go:195] Run: which crictl
	I1025 09:54:56.744278  457008 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:54:56.768009  457008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:54:56.768081  457008 ssh_runner.go:195] Run: crio --version
	I1025 09:54:56.795678  457008 ssh_runner.go:195] Run: crio --version
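Note: the bare crictl calls above resolve the CRI socket through the /etc/crictl.yaml written a few steps earlier; the explicit equivalent would be:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version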
	I1025 09:54:56.824108  457008 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:54:56.825165  457008 cli_runner.go:164] Run: docker network inspect embed-certs-846915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:54:56.842297  457008 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1025 09:54:56.847046  457008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
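Note: this hosts update writes the new content to a temp file and then cp's it over /etc/hosts rather than using sed -i or mv. Inside a container /etc/hosts is a bind mount and cannot be replaced by rename; cp truncates and rewrites the existing inode, which keeps the mount intact.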
	I1025 09:54:56.857067  457008 kubeadm.go:883] updating cluster {Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:54:56.857171  457008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:54:56.857214  457008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:56.888963  457008 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:56.888988  457008 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:54:56.889036  457008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:54:56.915006  457008 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:54:56.915029  457008 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:54:56.915037  457008 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1025 09:54:56.915134  457008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-846915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
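Note: the empty ExecStart= line in the kubelet drop-in above is deliberate systemd syntax: it clears the ExecStart inherited from the base kubelet.service before the drop-in supplies its own command line.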
	I1025 09:54:56.915198  457008 ssh_runner.go:195] Run: crio config
	I1025 09:54:56.960405  457008 cni.go:84] Creating CNI manager for ""
	I1025 09:54:56.960425  457008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:54:56.960446  457008 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:54:56.960476  457008 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-846915 NodeName:embed-certs-846915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:54:56.960649  457008 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-846915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:54:56.960737  457008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:54:56.968913  457008 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:54:56.968987  457008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:54:56.976772  457008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1025 09:54:56.989175  457008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:54:57.001654  457008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
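Once kubeadm.yaml.new is on disk it can be sanity-checked offline before use; a sketch, assuming the kubeadm binary is staged alongside kubectl under /var/lib/minikube/binaries and that this kubeadm version ships the config validate subcommand:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new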
	I1025 09:54:57.014581  457008 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:54:57.018476  457008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:54:57.028738  457008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:57.108359  457008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:57.134919  457008 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915 for IP: 192.168.103.2
	I1025 09:54:57.134944  457008 certs.go:195] generating shared ca certs ...
	I1025 09:54:57.134965  457008 certs.go:227] acquiring lock for ca certs: {Name:mk84f00dc0ba6e3a6eb84ff47b0ea60692217fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.135148  457008 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key
	I1025 09:54:57.135208  457008 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key
	I1025 09:54:57.135221  457008 certs.go:257] generating profile certs ...
	I1025 09:54:57.135321  457008 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/client.key
	I1025 09:54:57.135400  457008 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/apiserver.key.b5da4f55
	I1025 09:54:57.135449  457008 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/proxy-client.key
	I1025 09:54:57.135591  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem (1338 bytes)
	W1025 09:54:57.135636  457008 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145_empty.pem, impossibly tiny 0 bytes
	I1025 09:54:57.135649  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:54:57.135684  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:54:57.135715  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:54:57.135746  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/certs/key.pem (1675 bytes)
	I1025 09:54:57.135817  457008 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem (1708 bytes)
	I1025 09:54:57.136711  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:54:57.156186  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:54:57.174513  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:54:57.194100  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:54:57.219083  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 09:54:57.237565  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:54:57.254763  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:54:57.272283  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/embed-certs-846915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:54:57.289481  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:54:57.306704  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/certs/134145.pem --> /usr/share/ca-certificates/134145.pem (1338 bytes)
	I1025 09:54:57.323681  457008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/ssl/certs/1341452.pem --> /usr/share/ca-certificates/1341452.pem (1708 bytes)
	I1025 09:54:57.341494  457008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:54:57.353846  457008 ssh_runner.go:195] Run: openssl version
	I1025 09:54:57.359964  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:54:57.368508  457008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:57.372486  457008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:59 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:57.372540  457008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:54:57.408024  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:54:57.416387  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134145.pem && ln -fs /usr/share/ca-certificates/134145.pem /etc/ssl/certs/134145.pem"
	I1025 09:54:57.424628  457008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134145.pem
	I1025 09:54:57.428201  457008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:05 /usr/share/ca-certificates/134145.pem
	I1025 09:54:57.428248  457008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134145.pem
	I1025 09:54:57.462175  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134145.pem /etc/ssl/certs/51391683.0"
	I1025 09:54:57.470726  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1341452.pem && ln -fs /usr/share/ca-certificates/1341452.pem /etc/ssl/certs/1341452.pem"
	I1025 09:54:57.479469  457008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1341452.pem
	I1025 09:54:57.483150  457008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:05 /usr/share/ca-certificates/1341452.pem
	I1025 09:54:57.483201  457008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1341452.pem
	I1025 09:54:57.516984  457008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1341452.pem /etc/ssl/certs/3ec20f2e.0"
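The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed trust directory: each CA in /etc/ssl/certs must be reachable through a <subject-hash>.0 symlink for lookup to succeed. The same idiom for one certificate:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941, as logged above
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"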
	I1025 09:54:57.525156  457008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:54:57.529436  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:54:57.564653  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:54:57.599517  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:54:57.635935  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:54:57.682235  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:54:57.722478  457008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
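Each openssl x509 -checkend 86400 call above exits 0 only if the certificate will still be valid 24 hours from now. A loop over the same control-plane certs (a sketch, using the paths from the log):

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	         etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	    && echo "$c: valid for >24h" || echo "$c: expiring within 24h"
	done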
	I1025 09:54:57.771292  457008 kubeadm.go:400] StartCluster: {Name:embed-certs-846915 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-846915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:54:57.771403  457008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:54:57.771468  457008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:54:57.809369  457008 cri.go:89] found id: "46c544af25ffafca1d729eb37ffa1959807879d6234f84e37186f47588ac6ec9"
	I1025 09:54:57.809404  457008 cri.go:89] found id: "007e89b7baf40445b09598af39cfba319acdf11728b62f56a4aaf210995d2127"
	I1025 09:54:57.809410  457008 cri.go:89] found id: "48b644dd8de53c8507fceecb6ceae794c15a6e4bfda24197562f2d2226ed7a7a"
	I1025 09:54:57.809414  457008 cri.go:89] found id: "1a49d21a7ef6b31c7d183bb24b6647a09b20b673fd98b5105086202f5e9caed0"
	I1025 09:54:57.809418  457008 cri.go:89] found id: ""
	I1025 09:54:57.809467  457008 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:54:57.823074  457008 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:54:57Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:54:57.823150  457008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:54:57.831663  457008 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:54:57.831683  457008 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:54:57.831729  457008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:54:57.839555  457008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:54:57.840254  457008 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-846915" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:57.840583  457008 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-130604/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-846915" cluster setting kubeconfig missing "embed-certs-846915" context setting]
	I1025 09:54:57.841162  457008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.842882  457008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:54:57.850861  457008 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1025 09:54:57.850898  457008 kubeadm.go:601] duration metric: took 19.208602ms to restartPrimaryControlPlane
	I1025 09:54:57.850908  457008 kubeadm.go:402] duration metric: took 79.623638ms to StartCluster
	I1025 09:54:57.850925  457008 settings.go:142] acquiring lock: {Name:mke1e64be0ec6edf2eef6e52eb10d83b59bb8c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.850990  457008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:54:57.852542  457008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-130604/kubeconfig: {Name:mk77c1bcf7006fb0fbcd63044d310ce53011c0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:54:57.852799  457008 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:54:57.852875  457008 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:54:57.852996  457008 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-846915"
	I1025 09:54:57.853021  457008 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-846915"
	W1025 09:54:57.853035  457008 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:54:57.853054  457008 addons.go:69] Setting dashboard=true in profile "embed-certs-846915"
	I1025 09:54:57.853065  457008 config.go:182] Loaded profile config "embed-certs-846915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:54:57.853079  457008 addons.go:238] Setting addon dashboard=true in "embed-certs-846915"
	I1025 09:54:57.853067  457008 addons.go:69] Setting default-storageclass=true in profile "embed-certs-846915"
	W1025 09:54:57.853093  457008 addons.go:247] addon dashboard should already be in state true
	I1025 09:54:57.853104  457008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-846915"
	I1025 09:54:57.853063  457008 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:54:57.853128  457008 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:54:57.853457  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.853571  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.853627  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.855906  457008 out.go:179] * Verifying Kubernetes components...
	I1025 09:54:57.857196  457008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:54:57.879929  457008 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:54:57.879948  457008 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:54:57.881026  457008 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:57.881043  457008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:54:57.881074  457008 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 09:54:55.549837  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:54:57.550264  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:54:57.881097  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:57.881717  457008 addons.go:238] Setting addon default-storageclass=true in "embed-certs-846915"
	W1025 09:54:57.881738  457008 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:54:57.881767  457008 host.go:66] Checking if "embed-certs-846915" exists ...
	I1025 09:54:57.882197  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:54:57.882215  457008 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:54:57.882233  457008 cli_runner.go:164] Run: docker container inspect embed-certs-846915 --format={{.State.Status}}
	I1025 09:54:57.882272  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:57.912925  457008 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:57.912955  457008 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:54:57.913022  457008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-846915
	I1025 09:54:57.914868  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:57.916299  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:57.937956  457008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/embed-certs-846915/id_rsa Username:docker}
	I1025 09:54:57.998037  457008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:54:58.013908  457008 node_ready.go:35] waiting up to 6m0s for node "embed-certs-846915" to be "Ready" ...
	I1025 09:54:58.030429  457008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:54:58.035735  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:54:58.035760  457008 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:54:58.055893  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:54:58.055921  457008 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:54:58.057225  457008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:54:58.072489  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:54:58.072523  457008 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:54:58.091219  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:54:58.091239  457008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:54:58.108519  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:54:58.108542  457008 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:54:58.122900  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:54:58.122930  457008 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:54:58.135662  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:54:58.135688  457008 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:54:58.148215  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:54:58.148239  457008 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:54:58.160869  457008 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:58.160896  457008 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:54:58.173696  457008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:54:59.994021  457008 node_ready.go:49] node "embed-certs-846915" is "Ready"
	I1025 09:54:59.994059  457008 node_ready.go:38] duration metric: took 1.980116383s for node "embed-certs-846915" to be "Ready" ...
	I1025 09:54:59.994078  457008 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:54:59.994133  457008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:55:00.524810  457008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.494340014s)
	I1025 09:55:00.524885  457008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.467548938s)
	I1025 09:55:00.525043  457008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.35130278s)
	I1025 09:55:00.525304  457008 api_server.go:72] duration metric: took 2.672474172s to wait for apiserver process to appear ...
	I1025 09:55:00.525323  457008 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:55:00.525339  457008 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:55:00.527109  457008 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-846915 addons enable metrics-server
	
	I1025 09:55:00.530790  457008 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:55:00.530823  457008 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
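The two [-] entries above just mean the rbac and scheduling post-start hooks have not finished yet. The same poll can be reproduced with curl, since /healthz is readable anonymously under the default system:public-info-viewer binding (a sketch; -k skips verification of the self-signed serving cert):

	until curl -ksf https://192.168.103.2:8443/healthz >/dev/null; do sleep 0.5; done
	curl -ks 'https://192.168.103.2:8443/healthz?verbose'   # prints the [+]/[-] check list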
	I1025 09:55:00.541399  457008 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1025 09:54:59.550820  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:55:02.050441  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:55:00.543335  457008 addons.go:514] duration metric: took 2.690467088s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 09:55:01.025434  457008 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:55:01.029928  457008 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:55:01.029957  457008 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:55:01.525569  457008 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1025 09:55:01.530405  457008 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1025 09:55:01.531317  457008 api_server.go:141] control plane version: v1.34.1
	I1025 09:55:01.531342  457008 api_server.go:131] duration metric: took 1.00601266s to wait for apiserver health ...
	I1025 09:55:01.531364  457008 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:55:01.534517  457008 system_pods.go:59] 8 kube-system pods found
	I1025 09:55:01.534557  457008 system_pods.go:61] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:55:01.534571  457008 system_pods.go:61] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:55:01.534580  457008 system_pods.go:61] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:55:01.534586  457008 system_pods.go:61] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:55:01.534594  457008 system_pods.go:61] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:55:01.534601  457008 system_pods.go:61] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:55:01.534607  457008 system_pods.go:61] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:55:01.534612  457008 system_pods.go:61] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Running
	I1025 09:55:01.534619  457008 system_pods.go:74] duration metric: took 3.248397ms to wait for pod list to return data ...
	I1025 09:55:01.534630  457008 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:55:01.537060  457008 default_sa.go:45] found service account: "default"
	I1025 09:55:01.537080  457008 default_sa.go:55] duration metric: took 2.439904ms for default service account to be created ...
	I1025 09:55:01.537090  457008 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:55:01.539504  457008 system_pods.go:86] 8 kube-system pods found
	I1025 09:55:01.539542  457008 system_pods.go:89] "coredns-66bc5c9577-4w68k" [6743bff9-71c8-4295-960d-62d0c277c109] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:55:01.539555  457008 system_pods.go:89] "etcd-embed-certs-846915" [0534a916-54fc-4122-94aa-28f7c8a10b26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:55:01.539567  457008 system_pods.go:89] "kindnet-khx5l" [333a7d45-8903-4f7d-a7be-87cb28de77fa] Running
	I1025 09:55:01.539579  457008 system_pods.go:89] "kube-apiserver-embed-certs-846915" [9684b8d6-1354-4051-b2b5-dfd61695a558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:55:01.539592  457008 system_pods.go:89] "kube-controller-manager-embed-certs-846915" [2539bf54-8018-45bc-ad31-4a85d5895591] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:55:01.539604  457008 system_pods.go:89] "kube-proxy-kfqqh" [1ff535da-325f-4c85-a30a-d044753b2895] Running
	I1025 09:55:01.539623  457008 system_pods.go:89] "kube-scheduler-embed-certs-846915" [634ae567-fddf-46dc-9bb4-bbb37945db24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:55:01.539632  457008 system_pods.go:89] "storage-provisioner" [fc6c4fe9-5fd9-455b-b7c8-c225c7322eb8] Running
	I1025 09:55:01.539642  457008 system_pods.go:126] duration metric: took 2.545561ms to wait for k8s-apps to be running ...
	I1025 09:55:01.539655  457008 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:55:01.539709  457008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:55:01.553256  457008 system_svc.go:56] duration metric: took 13.59133ms WaitForService to wait for kubelet
	I1025 09:55:01.553280  457008 kubeadm.go:586] duration metric: took 3.700453295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:55:01.553307  457008 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:55:01.556207  457008 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:55:01.556239  457008 node_conditions.go:123] node cpu capacity is 8
	I1025 09:55:01.556252  457008 node_conditions.go:105] duration metric: took 2.940915ms to run NodePressure ...
	I1025 09:55:01.556266  457008 start.go:241] waiting for startup goroutines ...
	I1025 09:55:01.556272  457008 start.go:246] waiting for cluster config update ...
	I1025 09:55:01.556281  457008 start.go:255] writing updated cluster config ...
	I1025 09:55:01.556546  457008 ssh_runner.go:195] Run: rm -f paused
	I1025 09:55:01.560261  457008 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:55:01.563470  457008 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:55:04.550637  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	W1025 09:55:07.049223  449952 pod_ready.go:104] pod "coredns-66bc5c9577-29ltg" is not "Ready", error: <nil>
	I1025 09:55:08.549788  449952 pod_ready.go:94] pod "coredns-66bc5c9577-29ltg" is "Ready"
	I1025 09:55:08.549821  449952 pod_ready.go:86] duration metric: took 38.005597851s for pod "coredns-66bc5c9577-29ltg" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.552948  449952 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.557263  449952 pod_ready.go:94] pod "etcd-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:08.557290  449952 pod_ready.go:86] duration metric: took 4.316609ms for pod "etcd-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.559329  449952 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.562970  449952 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:08.562995  449952 pod_ready.go:86] duration metric: took 3.629414ms for pod "kube-apiserver-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.564977  449952 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.748757  449952 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:08.748792  449952 pod_ready.go:86] duration metric: took 183.792651ms for pod "kube-controller-manager-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:08.948726  449952 pod_ready.go:83] waiting for pod "kube-proxy-bg94v" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.347710  449952 pod_ready.go:94] pod "kube-proxy-bg94v" is "Ready"
	I1025 09:55:09.347744  449952 pod_ready.go:86] duration metric: took 398.987622ms for pod "kube-proxy-bg94v" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.548542  449952 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.947051  449952 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-880773" is "Ready"
	I1025 09:55:09.947079  449952 pod_ready.go:86] duration metric: took 398.50407ms for pod "kube-scheduler-default-k8s-diff-port-880773" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:09.947091  449952 pod_ready.go:40] duration metric: took 39.406100171s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:55:09.990440  449952 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:55:10.024224  449952 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-880773" cluster and "default" namespace by default
	W1025 09:55:03.568631  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:05.569905  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:07.571127  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:10.069750  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:12.569719  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:15.068937  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:17.569445  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:20.069705  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:22.069926  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:24.569244  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:27.070772  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:29.569630  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:32.069368  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	W1025 09:55:34.069476  457008 pod_ready.go:104] pod "coredns-66bc5c9577-4w68k" is not "Ready", error: <nil>
	I1025 09:55:36.068830  457008 pod_ready.go:94] pod "coredns-66bc5c9577-4w68k" is "Ready"
	I1025 09:55:36.068861  457008 pod_ready.go:86] duration metric: took 34.505369576s for pod "coredns-66bc5c9577-4w68k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.071425  457008 pod_ready.go:83] waiting for pod "etcd-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.075090  457008 pod_ready.go:94] pod "etcd-embed-certs-846915" is "Ready"
	I1025 09:55:36.075112  457008 pod_ready.go:86] duration metric: took 3.662871ms for pod "etcd-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.076946  457008 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.080447  457008 pod_ready.go:94] pod "kube-apiserver-embed-certs-846915" is "Ready"
	I1025 09:55:36.080468  457008 pod_ready.go:86] duration metric: took 3.502968ms for pod "kube-apiserver-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.082221  457008 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.267090  457008 pod_ready.go:94] pod "kube-controller-manager-embed-certs-846915" is "Ready"
	I1025 09:55:36.267117  457008 pod_ready.go:86] duration metric: took 184.877501ms for pod "kube-controller-manager-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.467383  457008 pod_ready.go:83] waiting for pod "kube-proxy-kfqqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:36.866485  457008 pod_ready.go:94] pod "kube-proxy-kfqqh" is "Ready"
	I1025 09:55:36.866512  457008 pod_ready.go:86] duration metric: took 399.107467ms for pod "kube-proxy-kfqqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:37.066668  457008 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:37.467508  457008 pod_ready.go:94] pod "kube-scheduler-embed-certs-846915" is "Ready"
	I1025 09:55:37.467545  457008 pod_ready.go:86] duration metric: took 400.847423ms for pod "kube-scheduler-embed-certs-846915" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:55:37.467561  457008 pod_ready.go:40] duration metric: took 35.907271983s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:55:37.511553  457008 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:55:37.513178  457008 out.go:179] * Done! kubectl is now configured to use "embed-certs-846915" cluster and "default" namespace by default
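	The pod_ready loop above polls each kube-system pod until its Ready condition turns true or the pod disappears, within the overall 4m0s budget. A minimal client-go sketch of that polling pattern, assuming a placeholder kubeconfig path and borrowing the pod name from the log purely for illustration (this is not minikube's actual implementation):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s for up to 4 minutes, mirroring the "extra waiting up to 4m0s" budget.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-4w68k", metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return true, nil // pod is gone, which also satisfies "Ready or be gone"
				}
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait finished, err:", err)
	}
	
	Passing immediate=true makes the condition run once before the first sleep, which is consistent with the sub-5ms successes logged for pods that were already Ready.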
	
	
	==> CRI-O <==
	Oct 25 09:55:21 embed-certs-846915 crio[566]: time="2025-10-25T09:55:21.27343234Z" level=info msg="Started container" PID=1768 containerID=f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t/dashboard-metrics-scraper id=0d9a582a-b6c1-4d88-ba43-e4f8781b9036 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4884ab66bff2891bd7a594571c07dc50314033c8d6fa932ddbef76ce70fe60f0
	Oct 25 09:55:21 embed-certs-846915 crio[566]: time="2025-10-25T09:55:21.319241978Z" level=info msg="Removing container: a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73" id=95529f1f-ad93-4820-aeca-eae168d26d63 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:55:21 embed-certs-846915 crio[566]: time="2025-10-25T09:55:21.329185176Z" level=info msg="Removed container a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t/dashboard-metrics-scraper" id=95529f1f-ad93-4820-aeca-eae168d26d63 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.347078929Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=dacf1f27-8d95-4135-916a-14d7493463d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.347969592Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4504680e-5feb-4898-98eb-2cdea775c750 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.348978457Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d1a6e45c-090a-4dbf-afeb-5e2bd7258b53 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.349103668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.353596637Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.353780829Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/eeed661d3eeec36d45f7589b0ab1d22e082c62bc438818c56f79c7d8a893942c/merged/etc/passwd: no such file or directory"
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.353816701Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/eeed661d3eeec36d45f7589b0ab1d22e082c62bc438818c56f79c7d8a893942c/merged/etc/group: no such file or directory"
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.354118687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.383507663Z" level=info msg="Created container 8fcb04b4201b14c458c49011837dbe7ebc093eadb439b95e7805d450e64ed33c: kube-system/storage-provisioner/storage-provisioner" id=d1a6e45c-090a-4dbf-afeb-5e2bd7258b53 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.384119435Z" level=info msg="Starting container: 8fcb04b4201b14c458c49011837dbe7ebc093eadb439b95e7805d450e64ed33c" id=cf8d71ff-d5b4-44b3-b0a8-b0e4eb19460c name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:55:31 embed-certs-846915 crio[566]: time="2025-10-25T09:55:31.386314249Z" level=info msg="Started container" PID=1782 containerID=8fcb04b4201b14c458c49011837dbe7ebc093eadb439b95e7805d450e64ed33c description=kube-system/storage-provisioner/storage-provisioner id=cf8d71ff-d5b4-44b3-b0a8-b0e4eb19460c name=/runtime.v1.RuntimeService/StartContainer sandboxID=616e6939f36526a30c945ce11bfec4a6934fb7d658c57daa00c9a10c8b588ecd
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.225388727Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fad7a670-9d0b-4831-be99-4509cd6293e2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.226390091Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5e5ee345-9175-4e3f-9bf3-13bd7e639bf2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.227567882Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t/dashboard-metrics-scraper" id=8b548ad4-3dcf-453a-a730-c9521bcaa623 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.227711752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.232940082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.233403039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.257034737Z" level=info msg="Created container 377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t/dashboard-metrics-scraper" id=8b548ad4-3dcf-453a-a730-c9521bcaa623 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.257688045Z" level=info msg="Starting container: 377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12" id=42b0961c-e994-4edc-86e4-ab2dbb649ee2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.259545307Z" level=info msg="Started container" PID=1819 containerID=377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t/dashboard-metrics-scraper id=42b0961c-e994-4edc-86e4-ab2dbb649ee2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4884ab66bff2891bd7a594571c07dc50314033c8d6fa932ddbef76ce70fe60f0
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.382737064Z" level=info msg="Removing container: f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8" id=1b81d7aa-593a-4de4-ae58-e31dcd7fbb27 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:55:43 embed-certs-846915 crio[566]: time="2025-10-25T09:55:43.39253446Z" level=info msg="Removed container f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t/dashboard-metrics-scraper" id=1b81d7aa-593a-4de4-ae58-e31dcd7fbb27 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	377cbf4f2e049       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   4884ab66bff28       dashboard-metrics-scraper-6ffb444bf9-2np5t   kubernetes-dashboard
	8fcb04b4201b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   616e6939f3652       storage-provisioner                          kube-system
	586ed27083f19       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   9bacf87816d97       kubernetes-dashboard-855c9754f9-ml7nd        kubernetes-dashboard
	e5372a56b35a9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   31473c4357758       busybox                                      default
	666a8cee87b70       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   07dbcb23573c5       coredns-66bc5c9577-4w68k                     kube-system
	32ca438e08c05       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   459b19b5f05d7       kube-proxy-kfqqh                             kube-system
	0963c187a474d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   616e6939f3652       storage-provisioner                          kube-system
	7f397e67e1866       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   23ae3cd2dccc2       kindnet-khx5l                                kube-system
	46c544af25ffa       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   a4791f7a0be9d       kube-scheduler-embed-certs-846915            kube-system
	007e89b7baf40       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   57837daaa2fa4       kube-apiserver-embed-certs-846915            kube-system
	48b644dd8de53       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   53c09a27459b9       kube-controller-manager-embed-certs-846915   kube-system
	1a49d21a7ef6b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   6a30010f788e6       etcd-embed-certs-846915                      kube-system
	
	
	==> coredns [666a8cee87b7020a849d0d0ed2e5ed7ac45f562ec0698b1bdac93a0834c88d97] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45916 - 15853 "HINFO IN 4879031163451701237.5902946850722960915. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02213859s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
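	The reflector errors above reduce to a single symptom: CoreDNS could not open a TCP connection to the kubernetes Service VIP (10.96.0.1:443) while the pod network was still converging. A quick probe of that path, sketched in Go with the address taken from the log; run from inside a pod, it separates a kube-proxy/routing fault from an apiserver fault:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// The endpoint the coredns reflectors failed to list from.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err) // corresponds to the "i/o timeout" above
			return
		}
		conn.Close()
		fmt.Println("TCP path to the service VIP is open")
	}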
	
	
	==> describe nodes <==
	Name:               embed-certs-846915
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-846915
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=embed-certs-846915
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_54_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:53:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-846915
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:55:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:55:40 +0000   Sat, 25 Oct 2025 09:53:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:55:40 +0000   Sat, 25 Oct 2025 09:53:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:55:40 +0000   Sat, 25 Oct 2025 09:53:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:55:40 +0000   Sat, 25 Oct 2025 09:55:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-846915
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                7759893b-5ad2-4235-8596-bf7be856684a
	  Boot ID:                    69cac88c-fbae-449a-9884-8eb99653f5b9
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-4w68k                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-846915                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-khx5l                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-846915             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-846915    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-kfqqh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-846915             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2np5t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ml7nd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node embed-certs-846915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node embed-certs-846915 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node embed-certs-846915 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     114s                 kubelet          Node embed-certs-846915 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  114s                 kubelet          Node embed-certs-846915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s                 kubelet          Node embed-certs-846915 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node embed-certs-846915 event: Registered Node embed-certs-846915 in Controller
	  Normal  NodeReady                97s                  kubelet          Node embed-certs-846915 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node embed-certs-846915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node embed-certs-846915 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node embed-certs-846915 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node embed-certs-846915 event: Registered Node embed-certs-846915 in Controller
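	The conditions and capacity shown above are the fields the NodePressure check earlier in the log reads (the same 304681132Ki ephemeral storage and 8 CPUs). A compact client-go sketch that prints them, with the kubeconfig path as a placeholder and the node name taken from this output:
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-846915", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Same rows as the Conditions table above: type, status, reason.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		fmt.Println("cpu:", node.Status.Capacity.Cpu().String(),
			"ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	}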
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: c6 7a 04 17 65 c0 82 39 d8 13 be 4b 08 00
	[Oct25 09:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[ +17.952906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 b8 8e e3 56 c9 08 06
	[  +0.000656] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 9a 6b 0e 1b b1 08 06
	[Oct25 09:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	[ +20.335832] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +1.293644] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[Oct25 09:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 68 92 7c c6 14 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a a9 7e 39 c7 42 08 06
	[  +0.270958] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0a d0 7b 0e 4a 8d 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6c 04 39 65 29 08 06
	[ +10.676024] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000020] ll header: 00000000: ff ff ff ff ff ff 1a 10 31 a9 02 ae 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 c3 7e 5b f8 fd 08 06
	
	
	==> etcd [1a49d21a7ef6b31c7d183bb24b6647a09b20b673fd98b5105086202f5e9caed0] <==
	{"level":"warn","ts":"2025-10-25T09:54:59.367593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.373618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.381094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.392056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.399742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.406843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.413195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.419525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.426732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.432962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.459538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.465821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.472333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.484760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.492021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.498321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.504935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.511142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.517183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.523591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.540589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.543995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.551567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.557454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:54:59.609140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48806","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:55:54 up  1:38,  0 user,  load average: 2.50, 3.95, 2.74
	Linux embed-certs-846915 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7f397e67e1866c16c1c0722221598e3f82eb5387d3ab8b306224b816096ebca1] <==
	I1025 09:55:00.698160       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:55:00.698423       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1025 09:55:00.698588       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:55:00.698600       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:55:00.698620       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:55:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:55:00.993328       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:55:00.993699       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:55:00.993725       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:55:00.993850       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:55:01.393340       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:55:01.393581       1 metrics.go:72] Registering metrics
	I1025 09:55:01.393673       1 controller.go:711] "Syncing nftables rules"
	I1025 09:55:10.901557       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:55:10.901621       1 main.go:301] handling current node
	I1025 09:55:20.906169       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:55:20.906229       1 main.go:301] handling current node
	I1025 09:55:30.902020       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:55:30.902078       1 main.go:301] handling current node
	I1025 09:55:40.901603       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:55:40.901669       1 main.go:301] handling current node
	I1025 09:55:50.902208       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1025 09:55:50.902244       1 main.go:301] handling current node
	
	
	==> kube-apiserver [007e89b7baf40445b09598af39cfba319acdf11728b62f56a4aaf210995d2127] <==
	I1025 09:55:00.063636       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:55:00.063655       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:55:00.063719       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:55:00.063582       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:55:00.063947       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:55:00.064142       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:55:00.064291       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:55:00.064341       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:55:00.064394       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:55:00.064406       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:55:00.066287       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:55:00.071446       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:55:00.088409       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:55:00.092781       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:55:00.306271       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:55:00.351947       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:55:00.370690       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:55:00.378706       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:55:00.385225       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:55:00.424982       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.179.19"}
	I1025 09:55:00.437945       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.166.163"}
	I1025 09:55:00.967122       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:55:03.840970       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:55:03.889823       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:55:03.988637       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [48b644dd8de53c8507fceecb6ceae794c15a6e4bfda24197562f2d2226ed7a7a] <==
	I1025 09:55:03.338891       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:55:03.341106       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:55:03.344273       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:55:03.345422       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:55:03.347633       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:55:03.349814       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:55:03.385592       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:55:03.386761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:55:03.386772       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:55:03.386796       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:55:03.386832       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:55:03.386844       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:55:03.386860       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:55:03.386925       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:55:03.386947       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:55:03.386953       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:55:03.386974       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:55:03.387364       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:55:03.387485       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:55:03.387633       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:55:03.392846       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:55:03.394084       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:55:03.404251       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:55:03.406500       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:55:03.410798       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [32ca438e08c054b3e50b3233e1b81fce33c79d0787be9c3e7e3baab4e4734697] <==
	I1025 09:55:00.615636       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:55:00.683233       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:55:00.783460       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:55:00.783544       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1025 09:55:00.783657       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:55:00.802722       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:55:00.802790       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:55:00.808187       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:55:00.808614       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:55:00.808648       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:55:00.809940       1 config.go:200] "Starting service config controller"
	I1025 09:55:00.809966       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:55:00.809994       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:55:00.810002       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:55:00.810083       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:55:00.810110       1 config.go:309] "Starting node config controller"
	I1025 09:55:00.810123       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:55:00.810642       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:55:00.810111       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:55:00.910262       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:55:00.910292       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:55:00.911657       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [46c544af25ffafca1d729eb37ffa1959807879d6234f84e37186f47588ac6ec9] <==
	I1025 09:54:59.034665       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:54:59.986513       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:54:59.986551       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:54:59.986569       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:54:59.986578       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:55:00.018625       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:55:00.018657       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:55:00.022036       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:55:00.022172       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:55:00.025403       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:55:00.022195       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:55:00.125859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:55:08 embed-certs-846915 kubelet[729]: I1025 09:55:08.277983     729 scope.go:117] "RemoveContainer" containerID="a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73"
	Oct 25 09:55:08 embed-certs-846915 kubelet[729]: E1025 09:55:08.278156     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:09 embed-certs-846915 kubelet[729]: I1025 09:55:09.283562     729 scope.go:117] "RemoveContainer" containerID="a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73"
	Oct 25 09:55:09 embed-certs-846915 kubelet[729]: E1025 09:55:09.283797     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:10 embed-certs-846915 kubelet[729]: I1025 09:55:10.288150     729 scope.go:117] "RemoveContainer" containerID="a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73"
	Oct 25 09:55:10 embed-certs-846915 kubelet[729]: E1025 09:55:10.288333     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:10 embed-certs-846915 kubelet[729]: I1025 09:55:10.300038     729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ml7nd" podStartSLOduration=1.4659565749999999 podStartE2EDuration="7.300016843s" podCreationTimestamp="2025-10-25 09:55:03 +0000 UTC" firstStartedPulling="2025-10-25 09:55:04.290571174 +0000 UTC m=+7.153697662" lastFinishedPulling="2025-10-25 09:55:10.124631442 +0000 UTC m=+12.987757930" observedRunningTime="2025-10-25 09:55:10.30000701 +0000 UTC m=+13.163133516" watchObservedRunningTime="2025-10-25 09:55:10.300016843 +0000 UTC m=+13.163143350"
	Oct 25 09:55:21 embed-certs-846915 kubelet[729]: I1025 09:55:21.224787     729 scope.go:117] "RemoveContainer" containerID="a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73"
	Oct 25 09:55:21 embed-certs-846915 kubelet[729]: I1025 09:55:21.317998     729 scope.go:117] "RemoveContainer" containerID="a1edd8879e0e1d8ae383eaa15e17a558c5783664d842da83d61a04670178ab73"
	Oct 25 09:55:21 embed-certs-846915 kubelet[729]: I1025 09:55:21.318262     729 scope.go:117] "RemoveContainer" containerID="f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8"
	Oct 25 09:55:21 embed-certs-846915 kubelet[729]: E1025 09:55:21.318509     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:29 embed-certs-846915 kubelet[729]: I1025 09:55:29.100785     729 scope.go:117] "RemoveContainer" containerID="f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8"
	Oct 25 09:55:29 embed-certs-846915 kubelet[729]: E1025 09:55:29.101056     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:31 embed-certs-846915 kubelet[729]: I1025 09:55:31.346692     729 scope.go:117] "RemoveContainer" containerID="0963c187a474d790c72b9c8390401140ff56882dd70e39b8e23c8ca7acaafd5c"
	Oct 25 09:55:43 embed-certs-846915 kubelet[729]: I1025 09:55:43.224856     729 scope.go:117] "RemoveContainer" containerID="f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8"
	Oct 25 09:55:43 embed-certs-846915 kubelet[729]: I1025 09:55:43.381313     729 scope.go:117] "RemoveContainer" containerID="f56266cc18663bb2732f3ce06d13ab1c16f202e5f7be88e521df304dc803fdc8"
	Oct 25 09:55:43 embed-certs-846915 kubelet[729]: I1025 09:55:43.381670     729 scope.go:117] "RemoveContainer" containerID="377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12"
	Oct 25 09:55:43 embed-certs-846915 kubelet[729]: E1025 09:55:43.381896     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:49 embed-certs-846915 kubelet[729]: I1025 09:55:49.100523     729 scope.go:117] "RemoveContainer" containerID="377cbf4f2e049f820c8eb8e49438617564a5a9ffd5e124f133aa19b4702bde12"
	Oct 25 09:55:49 embed-certs-846915 kubelet[729]: E1025 09:55:49.101217     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2np5t_kubernetes-dashboard(c5ecd8db-5f39-457d-bf4d-f7aa42eca965)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2np5t" podUID="c5ecd8db-5f39-457d-bf4d-f7aa42eca965"
	Oct 25 09:55:49 embed-certs-846915 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:55:49 embed-certs-846915 kubelet[729]: I1025 09:55:49.554167     729 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 09:55:49 embed-certs-846915 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:55:49 embed-certs-846915 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 25 09:55:49 embed-certs-846915 systemd[1]: kubelet.service: Consumed 1.727s CPU time.
	
	
	==> kubernetes-dashboard [586ed27083f1918f7a0180e22ca12263e87a4c0552578e80d52efc7dab81d226] <==
	2025/10/25 09:55:10 Using namespace: kubernetes-dashboard
	2025/10/25 09:55:10 Using in-cluster config to connect to apiserver
	2025/10/25 09:55:10 Using secret token for csrf signing
	2025/10/25 09:55:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:55:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:55:10 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:55:10 Generating JWE encryption key
	2025/10/25 09:55:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:55:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:55:10 Initializing JWE encryption key from synchronized object
	2025/10/25 09:55:10 Creating in-cluster Sidecar client
	2025/10/25 09:55:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:55:10 Serving insecurely on HTTP port: 9090
	2025/10/25 09:55:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:55:10 Starting overwatch
	
	
	==> storage-provisioner [0963c187a474d790c72b9c8390401140ff56882dd70e39b8e23c8ca7acaafd5c] <==
	I1025 09:55:00.582726       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:55:30.586767       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8fcb04b4201b14c458c49011837dbe7ebc093eadb439b95e7805d450e64ed33c] <==
	I1025 09:55:31.398313       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:55:31.404951       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:55:31.405372       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:55:31.407781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:34.863173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:39.123807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:42.722862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:45.776684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:48.799132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:48.803507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:55:48.803649       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:55:48.803817       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-846915_d7ebf65b-2913-4eb0-b547-b97a9481455a!
	I1025 09:55:48.803792       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"29bb7dfc-96d0-4f89-994b-0b96c89c26b8", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-846915_d7ebf65b-2913-4eb0-b547-b97a9481455a became leader
	W1025 09:55:48.806082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:48.809834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:55:48.904025       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-846915_d7ebf65b-2913-4eb0-b547-b97a9481455a!
	W1025 09:55:50.812531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:50.816461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:52.819943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:52.824824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-846915 -n embed-certs-846915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-846915 -n embed-certs-846915: exit status 2 (334.878091ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-846915 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.97s)
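For local triage, a minimal sketch of the flow this test exercises, using the profile name and binary path from this run (substitute your own profile):

  PROFILE=embed-certs-846915    # profile name taken from this run
  # Pause the cluster the way TestStartStop/group/embed-certs/serial/Pause does.
  out/minikube-linux-amd64 pause -p "$PROFILE" --alsologtostderr -v=1
  # Query component state with the same Go template the post-mortem above used;
  # in this run it printed "Running" but exited with status 2.
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"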

                                                
                                    

Test pass (262/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 12.79
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 12.77
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.41
21 TestBinaryMirror 0.81
22 TestOffline 50.77
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 146.68
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.43
48 TestAddons/StoppedEnableDisable 16.72
49 TestCertOptions 32.89
50 TestCertExpiration 220.58
52 TestForceSystemdFlag 26.29
53 TestForceSystemdEnv 33.6
58 TestErrorSpam/setup 20.36
59 TestErrorSpam/start 0.67
60 TestErrorSpam/status 0.95
61 TestErrorSpam/pause 5.58
62 TestErrorSpam/unpause 5.45
63 TestErrorSpam/stop 18.1
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.84
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.35
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 14.32
75 TestFunctional/serial/CacheCmd/cache/add_local 1.82
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.97
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 47.62
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.2
86 TestFunctional/serial/LogsFileCmd 1.23
87 TestFunctional/serial/InvalidService 3.87
89 TestFunctional/parallel/ConfigCmd 0.44
90 TestFunctional/parallel/DashboardCmd 9.62
91 TestFunctional/parallel/DryRun 0.48
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1.21
98 TestFunctional/parallel/AddonsCmd 0.21
99 TestFunctional/parallel/PersistentVolumeClaim 27.27
101 TestFunctional/parallel/SSHCmd 0.54
102 TestFunctional/parallel/CpCmd 1.75
103 TestFunctional/parallel/MySQL 18.33
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.7
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
113 TestFunctional/parallel/License 0.33
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.5
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.6
122 TestFunctional/parallel/ImageCommands/Setup 1.82
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.21
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
145 TestFunctional/parallel/ProfileCmd/profile_list 0.48
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
147 TestFunctional/parallel/MountCmd/any-port 8.13
148 TestFunctional/parallel/MountCmd/specific-port 1.73
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
150 TestFunctional/parallel/ServiceCmd/List 1.71
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 144.76
163 TestMultiControlPlane/serial/DeployApp 5.24
164 TestMultiControlPlane/serial/PingHostFromPods 1.02
165 TestMultiControlPlane/serial/AddWorkerNode 23.12
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
168 TestMultiControlPlane/serial/CopyFile 17.13
169 TestMultiControlPlane/serial/StopSecondaryNode 13.31
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.62
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 108.58
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.55
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
176 TestMultiControlPlane/serial/StopCluster 47.22
177 TestMultiControlPlane/serial/RestartCluster 57.24
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
179 TestMultiControlPlane/serial/AddSecondaryNode 63.77
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
184 TestJSONOutput/start/Command 38.8
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 6.12
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.23
209 TestKicCustomNetwork/create_custom_network 34.12
210 TestKicCustomNetwork/use_default_bridge_network 26.43
211 TestKicExistingNetwork 23.3
212 TestKicCustomSubnet 25.05
213 TestKicStaticIP 27.79
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 48.92
218 TestMountStart/serial/StartWithMountFirst 6.38
219 TestMountStart/serial/VerifyMountFirst 0.27
220 TestMountStart/serial/StartWithMountSecond 8.59
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.72
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.27
225 TestMountStart/serial/RestartStopped 7.78
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 96.53
230 TestMultiNode/serial/DeployApp2Nodes 4.71
231 TestMultiNode/serial/PingHostFrom2Pods 0.71
232 TestMultiNode/serial/AddNode 23.64
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.66
235 TestMultiNode/serial/CopyFile 9.86
236 TestMultiNode/serial/StopNode 2.26
237 TestMultiNode/serial/StartAfterStop 7.92
238 TestMultiNode/serial/RestartKeepsNodes 81.66
239 TestMultiNode/serial/DeleteNode 5.23
240 TestMultiNode/serial/StopMultiNode 30.33
241 TestMultiNode/serial/RestartMultiNode 27.57
242 TestMultiNode/serial/ValidateNameConflict 23.4
249 TestScheduledStopUnix 96.12
252 TestInsufficientStorage 9.44
253 TestRunningBinaryUpgrade 50.05
255 TestKubernetesUpgrade 300.47
256 TestMissingContainerUpgrade 72.21
258 TestPause/serial/Start 48.74
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
261 TestNoKubernetes/serial/StartWithK8s 32.61
262 TestNoKubernetes/serial/StartWithStopK8s 17.85
263 TestPause/serial/SecondStartNoReconfiguration 7.15
264 TestNoKubernetes/serial/Start 6.88
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
267 TestNoKubernetes/serial/ProfileList 2.02
268 TestNoKubernetes/serial/Stop 1.31
269 TestNoKubernetes/serial/StartNoArgs 7.52
277 TestNetworkPlugins/group/false 4.1
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
282 TestStoppedBinaryUpgrade/Setup 2.68
283 TestStoppedBinaryUpgrade/Upgrade 67.92
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.03
292 TestNetworkPlugins/group/auto/Start 36.78
293 TestNetworkPlugins/group/kindnet/Start 70.25
294 TestNetworkPlugins/group/auto/KubeletFlags 0.32
295 TestNetworkPlugins/group/auto/NetCatPod 8.23
296 TestNetworkPlugins/group/calico/Start 51.75
297 TestNetworkPlugins/group/auto/DNS 0.14
298 TestNetworkPlugins/group/auto/Localhost 0.1
299 TestNetworkPlugins/group/auto/HairPin 0.1
300 TestNetworkPlugins/group/custom-flannel/Start 53.2
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/calico/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
304 TestNetworkPlugins/group/kindnet/NetCatPod 12.22
305 TestNetworkPlugins/group/calico/KubeletFlags 0.31
306 TestNetworkPlugins/group/calico/NetCatPod 13.2
307 TestNetworkPlugins/group/kindnet/DNS 0.11
308 TestNetworkPlugins/group/kindnet/Localhost 0.08
309 TestNetworkPlugins/group/kindnet/HairPin 0.09
310 TestNetworkPlugins/group/calico/DNS 0.19
311 TestNetworkPlugins/group/calico/Localhost 0.1
312 TestNetworkPlugins/group/calico/HairPin 0.09
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.17
315 TestNetworkPlugins/group/custom-flannel/DNS 0.13
316 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
317 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
318 TestNetworkPlugins/group/enable-default-cni/Start 71.68
319 TestNetworkPlugins/group/flannel/Start 49.93
320 TestNetworkPlugins/group/bridge/Start 38.99
321 TestNetworkPlugins/group/flannel/ControllerPod 6.01
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
324 TestNetworkPlugins/group/bridge/NetCatPod 9.19
325 TestNetworkPlugins/group/flannel/NetCatPod 9.19
326 TestNetworkPlugins/group/bridge/DNS 0.11
327 TestNetworkPlugins/group/bridge/Localhost 0.1
328 TestNetworkPlugins/group/bridge/HairPin 0.1
329 TestNetworkPlugins/group/flannel/DNS 0.12
330 TestNetworkPlugins/group/flannel/Localhost 0.1
331 TestNetworkPlugins/group/flannel/HairPin 0.09
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.17
334 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
335 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
336 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
338 TestStartStop/group/old-k8s-version/serial/FirstStart 51.52
340 TestStartStop/group/no-preload/serial/FirstStart 64.93
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74.8
344 TestStartStop/group/newest-cni/serial/FirstStart 30.85
345 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/Stop 8.2
348 TestStartStop/group/old-k8s-version/serial/DeployApp 9.26
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
350 TestStartStop/group/newest-cni/serial/SecondStart 10.93
352 TestStartStop/group/old-k8s-version/serial/Stop 17.51
353 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
357 TestStartStop/group/no-preload/serial/DeployApp 8.23
360 TestStartStop/group/embed-certs/serial/FirstStart 39.97
361 TestStartStop/group/no-preload/serial/Stop 16.4
362 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
363 TestStartStop/group/old-k8s-version/serial/SecondStart 50.59
364 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.3
367 TestStartStop/group/no-preload/serial/SecondStart 24.93
368 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.98
369 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
370 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.21
371 TestStartStop/group/embed-certs/serial/DeployApp 10.28
372 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
375 TestStartStop/group/embed-certs/serial/Stop 18.11
376 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
378 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
380 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
382 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
383 TestStartStop/group/embed-certs/serial/SecondStart 47.42
384 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
385 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
388 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
389 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
390 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
TestDownloadOnly/v1.28.0/json-events (12.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-873386 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-873386 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.790258748s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.79s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1025 08:59:10.024791  134145 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1025 08:59:10.024884  134145 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
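The check above only confirms the tarball is already in the local cache; a hedged sketch of verifying it by hand, with the cache layout taken from the preload.go log lines above (adjust the minikube home directory for your machine):

  # Preload tarballs live under the minikube cache directory.
  ls -lh "$HOME/.minikube/cache/preloaded-tarball/"
  # Expected entry for this test, per the log above:
  #   preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4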

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-873386
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-873386: exit status 85 (72.535453ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-873386 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-873386 │ jenkins │ v1.37.0 │ 25 Oct 25 08:58 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:58:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:58:57.285641  134157 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:58:57.285904  134157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:58:57.285916  134157 out.go:374] Setting ErrFile to fd 2...
	I1025 08:58:57.285921  134157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:58:57.286159  134157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	W1025 08:58:57.286324  134157 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21794-130604/.minikube/config/config.json: open /home/jenkins/minikube-integration/21794-130604/.minikube/config/config.json: no such file or directory
	I1025 08:58:57.286885  134157 out.go:368] Setting JSON to true
	I1025 08:58:57.287782  134157 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2481,"bootTime":1761380256,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:58:57.287887  134157 start.go:141] virtualization: kvm guest
	I1025 08:58:57.289880  134157 out.go:99] [download-only-873386] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:58:57.290025  134157 notify.go:220] Checking for updates...
	W1025 08:58:57.290029  134157 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 08:58:57.291402  134157 out.go:171] MINIKUBE_LOCATION=21794
	I1025 08:58:57.292761  134157 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:58:57.293956  134157 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 08:58:57.295188  134157 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 08:58:57.296362  134157 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 08:58:57.298489  134157 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 08:58:57.298800  134157 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:58:57.321556  134157 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 08:58:57.321701  134157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:58:57.378933  134157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-25 08:58:57.36849271 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:58:57.379041  134157 docker.go:318] overlay module found
	I1025 08:58:57.380609  134157 out.go:99] Using the docker driver based on user configuration
	I1025 08:58:57.380644  134157 start.go:305] selected driver: docker
	I1025 08:58:57.380651  134157 start.go:925] validating driver "docker" against <nil>
	I1025 08:58:57.380735  134157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:58:57.437833  134157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-25 08:58:57.428182536 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:58:57.437999  134157 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:58:57.438507  134157 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1025 08:58:57.438677  134157 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 08:58:57.440383  134157 out.go:171] Using Docker driver with root privileges
	I1025 08:58:57.441428  134157 cni.go:84] Creating CNI manager for ""
	I1025 08:58:57.441522  134157 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:58:57.441541  134157 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 08:58:57.441650  134157 start.go:349] cluster config:
	{Name:download-only-873386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-873386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:58:57.442776  134157 out.go:99] Starting "download-only-873386" primary control-plane node in "download-only-873386" cluster
	I1025 08:58:57.442796  134157 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 08:58:57.443775  134157 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1025 08:58:57.443799  134157 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 08:58:57.443938  134157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 08:58:57.460454  134157 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 08:58:57.460630  134157 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 08:58:57.460724  134157 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 08:58:57.535600  134157 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1025 08:58:57.535645  134157 cache.go:58] Caching tarball of preloaded images
	I1025 08:58:57.535818  134157 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 08:58:57.537577  134157 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1025 08:58:57.537596  134157 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1025 08:58:57.637149  134157 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1025 08:58:57.637265  134157 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1025 08:59:01.563861  134157 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	
	
	* The control-plane node download-only-873386 host does not exist
	  To start a cluster, run: "minikube start -p download-only-873386"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
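The exit status 85 above is tolerated by the test rather than a defect: a --download-only profile caches artifacts without ever creating a host, so there is no node for `logs` to read. Following the hint in the captured output is the way to materialize the cluster (sketch; profile name from this run):

  # Create the control-plane node that `logs` was looking for.
  out/minikube-linux-amd64 start -p download-only-873386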

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-873386
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (12.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-624042 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-624042 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.766669648s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.77s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1025 08:59:23.233679  134145 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1025 08:59:23.233727  134145 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-624042
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-624042: exit status 85 (73.328315ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-873386 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-873386 │ jenkins │ v1.37.0 │ 25 Oct 25 08:58 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 08:59 UTC │
	│ delete  │ -p download-only-873386                                                                                                                                                   │ download-only-873386 │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │ 25 Oct 25 08:59 UTC │
	│ start   │ -o=json --download-only -p download-only-624042 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-624042 │ jenkins │ v1.37.0 │ 25 Oct 25 08:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:59:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:59:10.518042  134541 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:59:10.518325  134541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:59:10.518336  134541 out.go:374] Setting ErrFile to fd 2...
	I1025 08:59:10.518340  134541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:59:10.518603  134541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 08:59:10.519130  134541 out.go:368] Setting JSON to true
	I1025 08:59:10.520036  134541 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2494,"bootTime":1761380256,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:59:10.520132  134541 start.go:141] virtualization: kvm guest
	I1025 08:59:10.521964  134541 out.go:99] [download-only-624042] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:59:10.522133  134541 notify.go:220] Checking for updates...
	I1025 08:59:10.523281  134541 out.go:171] MINIKUBE_LOCATION=21794
	I1025 08:59:10.524668  134541 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:59:10.526408  134541 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 08:59:10.527568  134541 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 08:59:10.528750  134541 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 08:59:10.530950  134541 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 08:59:10.531171  134541 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:59:10.553422  134541 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 08:59:10.553508  134541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:59:10.608987  134541 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-25 08:59:10.599654725 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:59:10.609083  134541 docker.go:318] overlay module found
	I1025 08:59:10.610722  134541 out.go:99] Using the docker driver based on user configuration
	I1025 08:59:10.610752  134541 start.go:305] selected driver: docker
	I1025 08:59:10.610759  134541 start.go:925] validating driver "docker" against <nil>
	I1025 08:59:10.610849  134541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:59:10.670231  134541 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-25 08:59:10.659885783 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 08:59:10.670407  134541 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:59:10.670846  134541 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1025 08:59:10.671010  134541 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 08:59:10.672782  134541 out.go:171] Using Docker driver with root privileges
	I1025 08:59:10.673901  134541 cni.go:84] Creating CNI manager for ""
	I1025 08:59:10.673965  134541 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:59:10.673975  134541 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 08:59:10.674037  134541 start.go:349] cluster config:
	{Name:download-only-624042 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-624042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:59:10.675190  134541 out.go:99] Starting "download-only-624042" primary control-plane node in "download-only-624042" cluster
	I1025 08:59:10.675214  134541 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 08:59:10.676303  134541 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1025 08:59:10.676328  134541 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:59:10.676407  134541 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 08:59:10.693081  134541 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 08:59:10.693261  134541 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 08:59:10.693279  134541 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 08:59:10.693283  134541 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 08:59:10.693291  134541 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 08:59:10.843939  134541 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 08:59:10.843992  134541 cache.go:58] Caching tarball of preloaded images
	I1025 08:59:10.844265  134541 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:59:10.846014  134541 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1025 08:59:10.846056  134541 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1025 08:59:10.942796  134541 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1025 08:59:10.942847  134541 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21794-130604/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-624042 host does not exist
	  To start a cluster, run: "minikube start -p download-only-624042"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-624042
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.41s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-376173 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-376173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-376173
--- PASS: TestDownloadOnlyKic (0.41s)

TestBinaryMirror (0.81s)

=== RUN   TestBinaryMirror
I1025 08:59:24.376659  134145 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-059821 --alsologtostderr --binary-mirror http://127.0.0.1:34279 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-059821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-059821
--- PASS: TestBinaryMirror (0.81s)

TestOffline (50.77s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-173316 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-173316 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (48.133662178s)
helpers_test.go:175: Cleaning up "offline-crio-173316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-173316
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-173316: (2.636959267s)
--- PASS: TestOffline (50.77s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-273872
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-273872: exit status 85 (64.110794ms)

-- stdout --
	* Profile "addons-273872" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-273872"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-273872
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-273872: exit status 85 (64.264785ms)

-- stdout --
	* Profile "addons-273872" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-273872"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (146.68s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-273872 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-273872 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m26.681551979s)
--- PASS: TestAddons/Setup (146.68s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-273872 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-273872 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (9.43s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-273872 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-273872 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [aa24d212-b05e-42d4-9f1c-f48910024818] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [aa24d212-b05e-42d4-9f1c-f48910024818] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003514993s
addons_test.go:694: (dbg) Run:  kubectl --context addons-273872 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-273872 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-273872 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.43s)

TestAddons/StoppedEnableDisable (16.72s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-273872
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-273872: (16.431679286s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-273872
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-273872
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-273872
--- PASS: TestAddons/StoppedEnableDisable (16.72s)

TestCertOptions (32.89s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-203937 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-203937 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.797579154s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-203937 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-203937 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-203937 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-203937" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-203937
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-203937: (5.422930774s)
--- PASS: TestCertOptions (32.89s)

TestCertExpiration (220.58s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-225615 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-225615 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (30.96473236s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-225615 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-225615 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.982352501s)
helpers_test.go:175: Cleaning up "cert-expiration-225615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-225615
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-225615: (2.634721247s)
--- PASS: TestCertExpiration (220.58s)

TestForceSystemdFlag (26.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-170120 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-170120 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.153489385s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-170120 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-170120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-170120
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-170120: (3.814328544s)
--- PASS: TestForceSystemdFlag (26.29s)

TestForceSystemdEnv (33.6s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-683405 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-683405 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.491949757s)
helpers_test.go:175: Cleaning up "force-systemd-env-683405" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-683405
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-683405: (4.106096791s)
--- PASS: TestForceSystemdEnv (33.60s)

TestErrorSpam/setup (20.36s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-643049 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-643049 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-643049 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-643049 --driver=docker  --container-runtime=crio: (20.363170435s)
--- PASS: TestErrorSpam/setup (20.36s)

TestErrorSpam/start (0.67s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 start --dry-run
--- PASS: TestErrorSpam/start (0.67s)

TestErrorSpam/status (0.95s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 status
--- PASS: TestErrorSpam/status (0.95s)

TestErrorSpam/pause (5.58s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 pause: exit status 80 (1.935390105s)

-- stdout --
	* Pausing node nospam-643049 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:05:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 pause: exit status 80 (2.011649478s)

-- stdout --
	* Pausing node nospam-643049 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:05:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 pause: exit status 80 (1.629897764s)

-- stdout --
	* Pausing node nospam-643049 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:05:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.58s)

TestErrorSpam/unpause (5.45s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 unpause: exit status 80 (2.163191848s)

-- stdout --
	* Unpausing node nospam-643049 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:05:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 unpause: exit status 80 (1.714529498s)

-- stdout --
	* Unpausing node nospam-643049 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:05:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 unpause: exit status 80 (1.570669848s)

-- stdout --
	* Unpausing node nospam-643049 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:05:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.45s)

TestErrorSpam/stop (18.1s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 stop: (17.899733725s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643049 --log_dir /tmp/nospam-643049 stop
--- PASS: TestErrorSpam/stop (18.10s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21794-130604/.minikube/files/etc/test/nested/copy/134145/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.84s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-063906 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-063906 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.841740008s)
--- PASS: TestFunctional/serial/StartWithProxy (37.84s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.35s)

=== RUN   TestFunctional/serial/SoftStart
I1025 09:06:25.994675  134145 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-063906 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-063906 --alsologtostderr -v=8: (6.352992784s)
functional_test.go:678: soft start took 6.353729964s for "functional-063906" cluster.
I1025 09:06:32.348037  134145 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.35s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-063906 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (14.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-063906 cache add registry.k8s.io/pause:3.1: (4.92066664s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-063906 cache add registry.k8s.io/pause:3.3: (1.051917443s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-063906 cache add registry.k8s.io/pause:latest: (8.346284186s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (14.32s)

TestFunctional/serial/CacheCmd/cache/add_local (1.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-063906 /tmp/TestFunctionalserialCacheCmdcacheadd_local1796544235/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 cache add minikube-local-cache-test:functional-063906
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-063906 cache add minikube-local-cache-test:functional-063906: (1.478594522s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 cache delete minikube-local-cache-test:functional-063906
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-063906
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.82s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.503465ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-063906 cache reload: (1.074766098s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 kubectl -- --context functional-063906 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-063906 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (47.62s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-063906 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 09:06:52.509686  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:06:52.516172  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:06:52.527654  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:06:52.549076  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:06:52.590598  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:06:52.672183  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:06:52.833749  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:06:53.155487  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:06:53.797538  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:06:55.079534  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:06:57.642503  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:07:02.763938  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:07:13.005619  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:07:33.487853  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-063906 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.620028235s)
functional_test.go:776: restart took 47.620177777s for "functional-063906" cluster.
I1025 09:07:39.009254  134145 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (47.62s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-063906 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.2s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-063906 logs: (1.203210151s)
--- PASS: TestFunctional/serial/LogsCmd (1.20s)

TestFunctional/serial/LogsFileCmd (1.23s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 logs --file /tmp/TestFunctionalserialLogsFileCmd1663647345/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-063906 logs --file /tmp/TestFunctionalserialLogsFileCmd1663647345/001/logs.txt: (1.226843872s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.23s)

TestFunctional/serial/InvalidService (3.87s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-063906 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-063906
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-063906: exit status 115 (347.743946ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31067 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-063906 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.87s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 config get cpus: exit status 14 (81.775294ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 config get cpus: exit status 14 (78.651966ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (9.62s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-063906 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-063906 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 173895: os: process already finished
E1025 09:09:36.371284  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:11:52.510643  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:12:20.213261  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:16:52.510254  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DashboardCmd (9.62s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-063906 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-063906 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (226.483547ms)

-- stdout --
	* [functional-063906] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1025 09:08:12.902667  173128 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:08:12.902959  173128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:08:12.902970  173128 out.go:374] Setting ErrFile to fd 2...
	I1025 09:08:12.902976  173128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:08:12.903171  173128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:08:12.903644  173128 out.go:368] Setting JSON to false
	I1025 09:08:12.904637  173128 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3037,"bootTime":1761380256,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:08:12.904727  173128 start.go:141] virtualization: kvm guest
	I1025 09:08:12.906689  173128 out.go:179] * [functional-063906] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:08:12.907804  173128 notify.go:220] Checking for updates...
	I1025 09:08:12.907824  173128 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:08:12.908931  173128 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:08:12.910006  173128 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:08:12.911287  173128 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:08:12.912486  173128 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:08:12.913521  173128 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:08:12.915154  173128 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:08:12.915852  173128 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:08:12.942451  173128 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:08:12.942544  173128 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:08:13.010012  173128 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 09:08:12.996839761 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:08:13.010129  173128 docker.go:318] overlay module found
	I1025 09:08:13.035339  173128 out.go:179] * Using the docker driver based on existing profile
	I1025 09:08:13.037458  173128 start.go:305] selected driver: docker
	I1025 09:08:13.037485  173128 start.go:925] validating driver "docker" against &{Name:functional-063906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-063906 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:08:13.037789  173128 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:08:13.053502  173128 out.go:203] 
	W1025 09:08:13.068947  173128 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 09:08:13.070270  173128 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-063906 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)
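
Both dry runs validate the profile without creating anything: the first fails fast because 250MB is below minikube's usable minimum of 1800MB (exit code 23, RSRC_INSUFFICIENT_REQ_MEMORY), while the second omits --memory and succeeds. A minimal sketch of the contrast, with exit codes as observed above:

$ out/minikube-linux-amd64 start -p functional-063906 --dry-run --memory 250MB --driver=docker --container-runtime=crio; echo $?
23
$ out/minikube-linux-amd64 start -p functional-063906 --dry-run --driver=docker --container-runtime=crio; echo $?
0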

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-063906 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-063906 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (174.486383ms)

-- stdout --
	* [functional-063906] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1025 09:08:13.097093  173222 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:08:13.097377  173222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:08:13.097387  173222 out.go:374] Setting ErrFile to fd 2...
	I1025 09:08:13.097391  173222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:08:13.097704  173222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:08:13.098146  173222 out.go:368] Setting JSON to false
	I1025 09:08:13.099181  173222 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3037,"bootTime":1761380256,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:08:13.099279  173222 start.go:141] virtualization: kvm guest
	I1025 09:08:13.101122  173222 out.go:179] * [functional-063906] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1025 09:08:13.102295  173222 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:08:13.102357  173222 notify.go:220] Checking for updates...
	I1025 09:08:13.104387  173222 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:08:13.105880  173222 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:08:13.106919  173222 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:08:13.107908  173222 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:08:13.108891  173222 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:08:13.110213  173222 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:08:13.110729  173222 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:08:13.135747  173222 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:08:13.135903  173222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:08:13.197159  173222 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 09:08:13.187340605 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:08:13.197301  173222 docker.go:318] overlay module found
	I1025 09:08:13.201767  173222 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1025 09:08:13.202889  173222 start.go:305] selected driver: docker
	I1025 09:08:13.202917  173222 start.go:925] validating driver "docker" against &{Name:functional-063906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-063906 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:08:13.203052  173222 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:08:13.205026  173222 out.go:203] 
	W1025 09:08:13.206088  173222 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 09:08:13.207115  173222 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
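
This is the same failing dry run as above, localized to French. The log does not show how the locale was selected; assuming minikube honors the standard locale environment variables (an assumption, not visible in this output), the localized run could be reproduced with:

$ LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-063906 --dry-run --memory 250MB --driver=docker --container-runtime=crio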

TestFunctional/parallel/StatusCmd (1.21s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)
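
The -f flag takes a Go template over the status struct, so single fields can be extracted directly; the keys are the field names used above (.Host, .Kubelet, .APIServer, .Kubeconfig); the "kublet:" text in the test's format string is just an output label, not a field name. A one-field sketch, with illustrative output:

$ out/minikube-linux-amd64 -p functional-063906 status -f '{{.Host}}'
Running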

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)
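
The -o json variant is the machine-readable form of the same listing. Assuming the output is an object keyed by addon name with a Status field (a sketch of the schema, not verified in this log) and that jq is available, enabled addons can be filtered like so:

$ out/minikube-linux-amd64 -p functional-063906 addons list -o json | jq -r 'to_entries[] | select(.value.Status == "enabled") | .key'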

TestFunctional/parallel/PersistentVolumeClaim (27.27s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [8ec952cd-a56d-4c4f-93ab-c41a9cf3a433] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004529127s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-063906 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-063906 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-063906 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-063906 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [caa1f865-0163-4c9a-93dc-fdf1b6cabc6f] Pending
helpers_test.go:352: "sp-pod" [caa1f865-0163-4c9a-93dc-fdf1b6cabc6f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [caa1f865-0163-4c9a-93dc-fdf1b6cabc6f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004425436s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-063906 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-063906 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-063906 apply -f testdata/storage-provisioner/pod.yaml
I1025 09:08:03.561629  134145 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6fe4a1d3-2190-49e1-aa74-5e225a6c05e2] Pending
helpers_test.go:352: "sp-pod" [6fe4a1d3-2190-49e1-aa74-5e225a6c05e2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6fe4a1d3-2190-49e1-aa74-5e225a6c05e2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.002941722s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-063906 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.27s)
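
The persistence check above boils down to: bind a claim, write a file through one pod, delete that pod, then read the file back from a fresh pod bound to the same claim. The testdata manifests are not reproduced in this log; a minimal hand-rolled claim of the same shape (size and access mode are assumptions) would be:

$ kubectl --context functional-063906 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF

with the pod deleted and re-applied between the touch and the ls, exactly as in the steps above.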

TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (1.75s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh -n functional-063906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 cp functional-063906:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1772710810/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh -n functional-063906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh -n functional-063906 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.75s)
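
All three copies use the same <source> <target> shape, where either side may be prefixed with a node name (here the single node shares the profile name), and target directories on the node are created as needed, as the /tmp/does/not/exist case shows. Sketch:

$ out/minikube-linux-amd64 -p functional-063906 cp ./local.txt functional-063906:/home/docker/remote.txt
$ out/minikube-linux-amd64 -p functional-063906 cp functional-063906:/home/docker/remote.txt ./local-copy.txt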

TestFunctional/parallel/MySQL (18.33s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-063906 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-gmnks" [3ed9d61a-d33f-46d8-93ba-9125d65a17cd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-gmnks" [3ed9d61a-d33f-46d8-93ba-9125d65a17cd] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.003373682s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-063906 exec mysql-5bb876957f-gmnks -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-063906 exec mysql-5bb876957f-gmnks -- mysql -ppassword -e "show databases;": exit status 1 (87.996816ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1025 09:08:10.030730  134145 retry.go:31] will retry after 684.727854ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-063906 exec mysql-5bb876957f-gmnks -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-063906 exec mysql-5bb876957f-gmnks -- mysql -ppassword -e "show databases;": exit status 1 (93.569643ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1025 09:08:10.810263  134145 retry.go:31] will retry after 2.117117999s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-063906 exec mysql-5bb876957f-gmnks -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (18.33s)
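
The two ERROR 2002 failures are just mysqld still initializing inside an already-Running pod, so the harness retries with backoff until the server socket accepts connections. A shell equivalent of that wait loop, using the pod name from this run:

$ until kubectl --context functional-063906 exec mysql-5bb876957f-gmnks -- mysql -ppassword -e "show databases;" >/dev/null 2>&1; do sleep 2; done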

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/134145/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "sudo cat /etc/test/nested/copy/134145/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)
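
The checked path follows minikube's file-sync convention: files placed under $MINIKUBE_HOME/.minikube/files are copied into the node at the same relative path on start (134145 is just the test runner's pid). A sketch assuming that layout:

$ mkdir -p ~/.minikube/files/etc/test/nested/copy/134145
$ echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/134145/hosts
$ minikube ssh "sudo cat /etc/test/nested/copy/134145/hosts"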

TestFunctional/parallel/CertSync (1.7s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/134145.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "sudo cat /etc/ssl/certs/134145.pem"
I1025 09:07:52.765551  134145 detect.go:223] nested VM detected
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/134145.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "sudo cat /usr/share/ca-certificates/134145.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1341452.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "sudo cat /etc/ssl/certs/1341452.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1341452.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "sudo cat /usr/share/ca-certificates/1341452.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)
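
The .pem paths are CA certificates minikube syncs from $MINIKUBE_HOME/.minikube/certs, and the .0 names are OpenSSL subject-hash aliases for the same certs. Assuming that convention, the hash half of the check can be derived on the host, with output matching the filename verified above:

$ openssl x509 -in ~/.minikube/certs/134145.pem -noout -hash
51391683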

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-063906 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
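
The go-template above iterates the node's .metadata.labels map and prints only the keys; when the values are wanted too, the plainer form is:

$ kubectl --context functional-063906 get nodes --show-labels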

TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 ssh "sudo systemctl is-active docker": exit status 1 (304.771222ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 ssh "sudo systemctl is-active containerd": exit status 1 (291.143508ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
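
The pass condition here is exit-status plumbing: systemctl is-active exits 3 for an inactive unit, ssh surfaces that as "Process exited with status 3", and minikube ssh turns it into the non-zero exit the test expects on a crio cluster, where neither docker nor containerd should be active. By hand, with statuses as observed above:

$ out/minikube-linux-amd64 -p functional-063906 ssh "sudo systemctl is-active docker"; echo $?
inactive
1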

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.5s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image ls --format short --alsologtostderr
E1025 09:08:14.449172  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-063906 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-063906 image ls --format short --alsologtostderr:
I1025 09:08:14.281963  173862 out.go:360] Setting OutFile to fd 1 ...
I1025 09:08:14.282244  173862 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:08:14.282254  173862 out.go:374] Setting ErrFile to fd 2...
I1025 09:08:14.282261  173862 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:08:14.282503  173862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
I1025 09:08:14.283169  173862 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:08:14.283301  173862 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:08:14.283775  173862 cli_runner.go:164] Run: docker container inspect functional-063906 --format={{.State.Status}}
I1025 09:08:14.307292  173862 ssh_runner.go:195] Run: systemctl --version
I1025 09:08:14.307541  173862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-063906
I1025 09:08:14.333341  173862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/functional-063906/id_rsa Username:docker}
I1025 09:08:14.436294  173862 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-063906 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-063906  │ ce125be9a01a5 │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 657fdcd1c3659 │ 155MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-063906 image ls --format table --alsologtostderr:
I1025 09:08:18.704417  174667 out.go:360] Setting OutFile to fd 1 ...
I1025 09:08:18.704732  174667 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:08:18.704745  174667 out.go:374] Setting ErrFile to fd 2...
I1025 09:08:18.704752  174667 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:08:18.705000  174667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
I1025 09:08:18.705802  174667 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:08:18.705951  174667 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:08:18.706493  174667 cli_runner.go:164] Run: docker container inspect functional-063906 --format={{.State.Status}}
I1025 09:08:18.728848  174667 ssh_runner.go:195] Run: systemctl --version
I1025 09:08:18.728908  174667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-063906
I1025 09:08:18.751330  174667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/functional-063906/id_rsa Username:docker}
I1025 09:08:18.859789  174667 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-063906 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad04
5384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd27778
7b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"9bba0c0a4b32c10e12864ca1a9e0df9b794fafe206379df1f06ca491245c770f","repoDigests":["docker.io/library/0c7a3c239bd247db426911839b9d88ece4a671b5fdf98fb536de08c70a262d1f-tmp@sha256:3930d6a8fd3726cbae90204ee34a740534c76fe7651f1ce
ec1ca98cb49b86f0a"],"repoTags":[],"size":"1466132"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b501620
9e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"ce125be9a01a517618d58e921655c4414851eb1fd18c3c2d6b2e1bbc1eb6b568","repoDigests":["localhost/my-image@sha256:bcad5524a3c7ca650daf926ae41de4fec3d19c3d9d04c524786692e7c188df23"],"repoTags":["localhost/my-image:functional-063906"],"size":"1468744"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb35
00"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b1
64e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8"],"repoTags":["docker.io/library/nginx:latest"],"size":"155467611"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-063906 image ls --format json --alsologtostderr:
I1025 09:08:18.436120  174615 out.go:360] Setting OutFile to fd 1 ...
I1025 09:08:18.436446  174615 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:08:18.436459  174615 out.go:374] Setting ErrFile to fd 2...
I1025 09:08:18.436467  174615 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:08:18.436761  174615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
I1025 09:08:18.437597  174615 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:08:18.437768  174615 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:08:18.438395  174615 cli_runner.go:164] Run: docker container inspect functional-063906 --format={{.State.Status}}
I1025 09:08:18.461602  174615 ssh_runner.go:195] Run: systemctl --version
I1025 09:08:18.461674  174615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-063906
I1025 09:08:18.484525  174615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/functional-063906/id_rsa Username:docker}
I1025 09:08:18.591895  174615 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
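
The json format is a single array of image objects with id, repoDigests, repoTags and size fields (as seen above), which makes it the easiest variant to post-process. Assuming jq is available:

$ out/minikube-linux-amd64 -p functional-063906 image ls --format json | jq -r '.[].repoTags[]?'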

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-063906 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 9bba0c0a4b32c10e12864ca1a9e0df9b794fafe206379df1f06ca491245c770f
repoDigests:
- docker.io/library/0c7a3c239bd247db426911839b9d88ece4a671b5fdf98fb536de08c70a262d1f-tmp@sha256:3930d6a8fd3726cbae90204ee34a740534c76fe7651f1ceec1ca98cb49b86f0a
repoTags: []
size: "1466132"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8
repoTags:
- docker.io/library/nginx:latest
size: "155467611"
- id: ce125be9a01a517618d58e921655c4414851eb1fd18c3c2d6b2e1bbc1eb6b568
repoDigests:
- localhost/my-image@sha256:bcad5524a3c7ca650daf926ae41de4fec3d19c3d9d04c524786692e7c188df23
repoTags:
- localhost/my-image:functional-063906
size: "1468744"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-063906 image ls --format yaml --alsologtostderr:
I1025 09:08:18.154947  174563 out.go:360] Setting OutFile to fd 1 ...
I1025 09:08:18.155289  174563 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:08:18.155303  174563 out.go:374] Setting ErrFile to fd 2...
I1025 09:08:18.155310  174563 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:08:18.155710  174563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
I1025 09:08:18.156638  174563 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:08:18.156786  174563 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:08:18.157433  174563 cli_runner.go:164] Run: docker container inspect functional-063906 --format={{.State.Status}}
I1025 09:08:18.181727  174563 ssh_runner.go:195] Run: systemctl --version
I1025 09:08:18.181798  174563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-063906
I1025 09:08:18.203486  174563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/functional-063906/id_rsa Username:docker}
I1025 09:08:18.315117  174563 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
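For reference, both listings in this group come from the same subcommand with a different --format flag; a minimal sketch against this run's profile:

    out/minikube-linux-amd64 -p functional-063906 image ls --format json   # as in ImageListJson above
    out/minikube-linux-amd64 -p functional-063906 image ls --format yaml   # as in this test

On the crio runtime the command shells into the node and reads the image store via sudo crictl images --output json, which is the last step visible in each stderr trace.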

TestFunctional/parallel/ImageCommands/ImageBuild (3.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 ssh pgrep buildkitd: exit status 1 (279.845224ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image build -t localhost/my-image:functional-063906 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-063906 image build -t localhost/my-image:functional-063906 testdata/build --alsologtostderr: (3.068660952s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-063906 image build -t localhost/my-image:functional-063906 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9bba0c0a4b3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-063906
--> ce125be9a01
Successfully tagged localhost/my-image:functional-063906
ce125be9a01a517618d58e921655c4414851eb1fd18c3c2d6b2e1bbc1eb6b568
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-063906 image build -t localhost/my-image:functional-063906 testdata/build --alsologtostderr:
I1025 09:08:14.808127  174067 out.go:360] Setting OutFile to fd 1 ...
I1025 09:08:14.808424  174067 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:08:14.808434  174067 out.go:374] Setting ErrFile to fd 2...
I1025 09:08:14.808438  174067 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:08:14.808636  174067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
I1025 09:08:14.809201  174067 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:08:14.809860  174067 config.go:182] Loaded profile config "functional-063906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:08:14.810222  174067 cli_runner.go:164] Run: docker container inspect functional-063906 --format={{.State.Status}}
I1025 09:08:14.828065  174067 ssh_runner.go:195] Run: systemctl --version
I1025 09:08:14.828108  174067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-063906
I1025 09:08:14.844610  174067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/functional-063906/id_rsa Username:docker}
I1025 09:08:14.942920  174067 build_images.go:161] Building image from path: /tmp/build.3420461088.tar
I1025 09:08:14.942974  174067 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 09:08:14.950736  174067 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3420461088.tar
I1025 09:08:14.954335  174067 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3420461088.tar: stat -c "%s %y" /var/lib/minikube/build/build.3420461088.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3420461088.tar': No such file or directory
I1025 09:08:14.954376  174067 ssh_runner.go:362] scp /tmp/build.3420461088.tar --> /var/lib/minikube/build/build.3420461088.tar (3072 bytes)
I1025 09:08:14.972003  174067 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3420461088
I1025 09:08:14.979562  174067 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3420461088 -xf /var/lib/minikube/build/build.3420461088.tar
I1025 09:08:14.987474  174067 crio.go:315] Building image: /var/lib/minikube/build/build.3420461088
I1025 09:08:14.987542  174067 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-063906 /var/lib/minikube/build/build.3420461088 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1025 09:08:17.799894  174067 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-063906 /var/lib/minikube/build/build.3420461088 --cgroup-manager=cgroupfs: (2.81232431s)
I1025 09:08:17.799971  174067 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3420461088
I1025 09:08:17.808053  174067 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3420461088.tar
I1025 09:08:17.815529  174067 build_images.go:217] Built localhost/my-image:functional-063906 from /tmp/build.3420461088.tar
I1025 09:08:17.815558  174067 build_images.go:133] succeeded building to: functional-063906
I1025 09:08:17.815563  174067 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.60s)
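The STEP lines in the stdout above imply a build context along these lines; this is a hypothetical reconstruction of testdata/build, not the actual fixture:

    # Dockerfile (assumed contents of testdata/build)
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

Because the runtime is crio, minikube image build tars the context, copies it under /var/lib/minikube/build/ on the node, and delegates to sudo podman build --cgroup-manager=cgroupfs, exactly the sequence the stderr trace records.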

TestFunctional/parallel/ImageCommands/Setup (1.82s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.79874143s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-063906
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.82s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-063906 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-063906 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-063906 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-063906 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 167293: os: process already finished
helpers_test.go:519: unable to terminate pid 167114: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-063906 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-063906 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [427c40f1-1cae-4019-b1f5-aa9d0a018288] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [427c40f1-1cae-4019-b1f5-aa9d0a018288] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003865453s
I1025 09:07:58.309452  134145 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)
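Outside the harness, the flow these tunnel tests exercise looks roughly like this (profile and service names taken from this run):

    out/minikube-linux-amd64 -p functional-063906 tunnel &   # keep running; routes LoadBalancer traffic into the cluster
    kubectl --context functional-063906 apply -f testdata/testsvc.yaml
    kubectl --context functional-063906 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

The IngressIP and AccessDirect tests below read that address and issue an HTTP request against it.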

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image rm kicbase/echo-server:functional-063906 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 update-context --alsologtostderr -v=2
2025/10/25 09:08:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
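All three UpdateContextCmd subtests drive the same command, which rewrites the profile's kubeconfig entry so kubectl targets the cluster's current API server address; a minimal sketch:

    out/minikube-linux-amd64 -p functional-063906 update-context
    kubectl --context functional-063906 get nodes   # should reach the refreshed endpoint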

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-063906 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.211.194 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-063906 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "397.182217ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "79.40199ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "398.050784ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "82.467697ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)
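The timings above reflect what the flags trade off: the default listing probes each cluster's live status, while the light variants read only the profile configs; roughly:

    out/minikube-linux-amd64 profile list                    # table output with status probes (~400ms here)
    out/minikube-linux-amd64 profile list -l                 # light mode, config only (~80ms here)
    out/minikube-linux-amd64 profile list -o json --light    # same trade-off for JSON consumers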

TestFunctional/parallel/MountCmd/any-port (8.13s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-063906 /tmp/TestFunctionalparallelMountCmdany-port1531448523/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761383281372258709" to /tmp/TestFunctionalparallelMountCmdany-port1531448523/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761383281372258709" to /tmp/TestFunctionalparallelMountCmdany-port1531448523/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761383281372258709" to /tmp/TestFunctionalparallelMountCmdany-port1531448523/001/test-1761383281372258709
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (345.617967ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:08:01.718265  134145 retry.go:31] will retry after 527.175253ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 09:08 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 09:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 09:08 test-1761383281372258709
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh cat /mount-9p/test-1761383281372258709
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-063906 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [0ec3a527-524a-499e-a3fd-64c6f078a67b] Pending
helpers_test.go:352: "busybox-mount" [0ec3a527-524a-499e-a3fd-64c6f078a67b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [0ec3a527-524a-499e-a3fd-64c6f078a67b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [0ec3a527-524a-499e-a3fd-64c6f078a67b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003045212s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-063906 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-063906 /tmp/TestFunctionalparallelMountCmdany-port1531448523/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.13s)
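The 9p round-trip under test reduces to a mount, a probe, and a cleanup; a minimal sketch, with /tmp/hostdir standing in for the per-test temp directory:

    out/minikube-linux-amd64 mount -p functional-063906 /tmp/hostdir:/mount-9p &   # host dir -> guest path over 9p
    out/minikube-linux-amd64 -p functional-063906 ssh "findmnt -T /mount-9p"       # verify the mount landed
    out/minikube-linux-amd64 -p functional-063906 ssh "ls -la /mount-9p"           # host-side files appear here

The specific-port variant below adds --port 46464 so the 9p server binds a fixed host port.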

TestFunctional/parallel/MountCmd/specific-port (1.73s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-063906 /tmp/TestFunctionalparallelMountCmdspecific-port725202427/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.914354ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:08:09.787110  134145 retry.go:31] will retry after 408.016286ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-063906 /tmp/TestFunctionalparallelMountCmdspecific-port725202427/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 ssh "sudo umount -f /mount-9p": exit status 1 (277.441677ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-063906 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-063906 /tmp/TestFunctionalparallelMountCmdspecific-port725202427/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.73s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-063906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3519995462/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-063906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3519995462/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-063906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3519995462/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-063906 ssh "findmnt -T" /mount1: exit status 1 (367.898479ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:08:11.600281  134145 retry.go:31] will retry after 377.347385ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-063906 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-063906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3519995462/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-063906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3519995462/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-063906 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3519995462/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

TestFunctional/parallel/ServiceCmd/List (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-063906 service list: (1.706232899s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-063906 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-063906 service list -o json: (1.703267968s)
functional_test.go:1504: Took "1.703361539s" to run "out/minikube-linux-amd64 -p functional-063906 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-063906
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-063906
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-063906
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (144.76s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-102727 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m24.040707358s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (144.76s)
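The invocation above is the stock HA recipe: --ha provisions multiple control-plane nodes behind a shared virtual API endpoint (the 192.168.49.254:8443 address seen later in this group), and --wait true blocks until core components report healthy. A minimal reproduction:

    out/minikube-linux-amd64 -p ha-102727 start --ha --memory 3072 --wait true \
        --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p ha-102727 status   # one stanza per node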

TestMultiControlPlane/serial/DeployApp (5.24s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-102727 kubectl -- rollout status deployment/busybox: (3.207008027s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-98tn2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-k4mxd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-v5mcz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-98tn2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-k4mxd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-v5mcz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-98tn2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-k4mxd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-v5mcz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.24s)

TestMultiControlPlane/serial/PingHostFromPods (1.02s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-98tn2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-98tn2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-k4mxd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-k4mxd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-v5mcz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 kubectl -- exec busybox-7b57f96db7-v5mcz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.02s)
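Each pod check is a two-step pipeline: resolve host.minikube.internal inside the pod, then ping the address it maps to (the docker network gateway, 192.168.49.1 in this run):

    # run inside each busybox pod; awk 'NR==5' picks the answer line of nslookup's output
    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
    ping -c 1 192.168.49.1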

TestMultiControlPlane/serial/AddWorkerNode (23.12s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-102727 node add --alsologtostderr -v 5: (22.247142124s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.12s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-102727 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

TestMultiControlPlane/serial/CopyFile (17.13s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp testdata/cp-test.txt ha-102727:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile836139171/001/cp-test_ha-102727.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727:/home/docker/cp-test.txt ha-102727-m02:/home/docker/cp-test_ha-102727_ha-102727-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m02 "sudo cat /home/docker/cp-test_ha-102727_ha-102727-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727:/home/docker/cp-test.txt ha-102727-m03:/home/docker/cp-test_ha-102727_ha-102727-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m03 "sudo cat /home/docker/cp-test_ha-102727_ha-102727-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727:/home/docker/cp-test.txt ha-102727-m04:/home/docker/cp-test_ha-102727_ha-102727-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m04 "sudo cat /home/docker/cp-test_ha-102727_ha-102727-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp testdata/cp-test.txt ha-102727-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile836139171/001/cp-test_ha-102727-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727-m02:/home/docker/cp-test.txt ha-102727:/home/docker/cp-test_ha-102727-m02_ha-102727.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727 "sudo cat /home/docker/cp-test_ha-102727-m02_ha-102727.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727-m02:/home/docker/cp-test.txt ha-102727-m03:/home/docker/cp-test_ha-102727-m02_ha-102727-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m03 "sudo cat /home/docker/cp-test_ha-102727-m02_ha-102727-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727-m02:/home/docker/cp-test.txt ha-102727-m04:/home/docker/cp-test_ha-102727-m02_ha-102727-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m04 "sudo cat /home/docker/cp-test_ha-102727-m02_ha-102727-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp testdata/cp-test.txt ha-102727-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile836139171/001/cp-test_ha-102727-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727-m03:/home/docker/cp-test.txt ha-102727:/home/docker/cp-test_ha-102727-m03_ha-102727.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727 "sudo cat /home/docker/cp-test_ha-102727-m03_ha-102727.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727-m03:/home/docker/cp-test.txt ha-102727-m02:/home/docker/cp-test_ha-102727-m03_ha-102727-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m02 "sudo cat /home/docker/cp-test_ha-102727-m03_ha-102727-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727-m03:/home/docker/cp-test.txt ha-102727-m04:/home/docker/cp-test_ha-102727-m03_ha-102727-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m04 "sudo cat /home/docker/cp-test_ha-102727-m03_ha-102727-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp testdata/cp-test.txt ha-102727-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile836139171/001/cp-test_ha-102727-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727-m04:/home/docker/cp-test.txt ha-102727:/home/docker/cp-test_ha-102727-m04_ha-102727.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727 "sudo cat /home/docker/cp-test_ha-102727-m04_ha-102727.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727-m04:/home/docker/cp-test.txt ha-102727-m02:/home/docker/cp-test_ha-102727-m04_ha-102727-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m02 "sudo cat /home/docker/cp-test_ha-102727-m04_ha-102727-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 cp ha-102727-m04:/home/docker/cp-test.txt ha-102727-m03:/home/docker/cp-test_ha-102727-m04_ha-102727-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 ssh -n ha-102727-m03 "sudo cat /home/docker/cp-test_ha-102727-m04_ha-102727-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.13s)
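The long matrix above is every pairing of three shapes of minikube cp; /tmp/out.txt below is only an illustrative destination:

    out/minikube-linux-amd64 -p ha-102727 cp testdata/cp-test.txt ha-102727:/home/docker/cp-test.txt       # host -> node
    out/minikube-linux-amd64 -p ha-102727 cp ha-102727:/home/docker/cp-test.txt /tmp/out.txt               # node -> host
    out/minikube-linux-amd64 -p ha-102727 cp ha-102727:/home/docker/cp-test.txt ha-102727-m02:/tmp/out.txt # node -> node

Each copy is verified with minikube ssh -n <node> "sudo cat <path>" on the receiving side.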

TestMultiControlPlane/serial/StopSecondaryNode (13.31s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-102727 node stop m02 --alsologtostderr -v 5: (12.598248181s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-102727 status --alsologtostderr -v 5: exit status 7 (709.715011ms)

-- stdout --
	ha-102727
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-102727-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-102727-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-102727-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1025 09:21:44.231126  199199 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:21:44.231428  199199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:21:44.231440  199199 out.go:374] Setting ErrFile to fd 2...
	I1025 09:21:44.231444  199199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:21:44.231699  199199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:21:44.231919  199199 out.go:368] Setting JSON to false
	I1025 09:21:44.231962  199199 mustload.go:65] Loading cluster: ha-102727
	I1025 09:21:44.232060  199199 notify.go:220] Checking for updates...
	I1025 09:21:44.232575  199199 config.go:182] Loaded profile config "ha-102727": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:21:44.232598  199199 status.go:174] checking status of ha-102727 ...
	I1025 09:21:44.233111  199199 cli_runner.go:164] Run: docker container inspect ha-102727 --format={{.State.Status}}
	I1025 09:21:44.253124  199199 status.go:371] ha-102727 host status = "Running" (err=<nil>)
	I1025 09:21:44.253174  199199 host.go:66] Checking if "ha-102727" exists ...
	I1025 09:21:44.253542  199199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-102727
	I1025 09:21:44.271179  199199 host.go:66] Checking if "ha-102727" exists ...
	I1025 09:21:44.271455  199199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:21:44.271494  199199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-102727
	I1025 09:21:44.289315  199199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/ha-102727/id_rsa Username:docker}
	I1025 09:21:44.387074  199199 ssh_runner.go:195] Run: systemctl --version
	I1025 09:21:44.393470  199199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:21:44.406925  199199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:21:44.465385  199199 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:21:44.455114187 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:21:44.465896  199199 kubeconfig.go:125] found "ha-102727" server: "https://192.168.49.254:8443"
	I1025 09:21:44.465925  199199 api_server.go:166] Checking apiserver status ...
	I1025 09:21:44.465958  199199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:21:44.477842  199199 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	W1025 09:21:44.487715  199199 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:21:44.487775  199199 ssh_runner.go:195] Run: ls
	I1025 09:21:44.491895  199199 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 09:21:44.498610  199199 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 09:21:44.498640  199199 status.go:463] ha-102727 apiserver status = Running (err=<nil>)
	I1025 09:21:44.498651  199199 status.go:176] ha-102727 status: &{Name:ha-102727 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:21:44.498667  199199 status.go:174] checking status of ha-102727-m02 ...
	I1025 09:21:44.499001  199199 cli_runner.go:164] Run: docker container inspect ha-102727-m02 --format={{.State.Status}}
	I1025 09:21:44.517619  199199 status.go:371] ha-102727-m02 host status = "Stopped" (err=<nil>)
	I1025 09:21:44.517642  199199 status.go:384] host is not running, skipping remaining checks
	I1025 09:21:44.517650  199199 status.go:176] ha-102727-m02 status: &{Name:ha-102727-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:21:44.517672  199199 status.go:174] checking status of ha-102727-m03 ...
	I1025 09:21:44.517945  199199 cli_runner.go:164] Run: docker container inspect ha-102727-m03 --format={{.State.Status}}
	I1025 09:21:44.537220  199199 status.go:371] ha-102727-m03 host status = "Running" (err=<nil>)
	I1025 09:21:44.537251  199199 host.go:66] Checking if "ha-102727-m03" exists ...
	I1025 09:21:44.537598  199199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-102727-m03
	I1025 09:21:44.556736  199199 host.go:66] Checking if "ha-102727-m03" exists ...
	I1025 09:21:44.557017  199199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:21:44.557063  199199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-102727-m03
	I1025 09:21:44.576362  199199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/ha-102727-m03/id_rsa Username:docker}
	I1025 09:21:44.675086  199199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:21:44.687533  199199 kubeconfig.go:125] found "ha-102727" server: "https://192.168.49.254:8443"
	I1025 09:21:44.687560  199199 api_server.go:166] Checking apiserver status ...
	I1025 09:21:44.687596  199199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:21:44.698324  199199 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W1025 09:21:44.706501  199199 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:21:44.706558  199199 ssh_runner.go:195] Run: ls
	I1025 09:21:44.710128  199199 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 09:21:44.714472  199199 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 09:21:44.714498  199199 status.go:463] ha-102727-m03 apiserver status = Running (err=<nil>)
	I1025 09:21:44.714508  199199 status.go:176] ha-102727-m03 status: &{Name:ha-102727-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:21:44.714561  199199 status.go:174] checking status of ha-102727-m04 ...
	I1025 09:21:44.714805  199199 cli_runner.go:164] Run: docker container inspect ha-102727-m04 --format={{.State.Status}}
	I1025 09:21:44.733172  199199 status.go:371] ha-102727-m04 host status = "Running" (err=<nil>)
	I1025 09:21:44.733196  199199 host.go:66] Checking if "ha-102727-m04" exists ...
	I1025 09:21:44.733579  199199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-102727-m04
	I1025 09:21:44.750996  199199 host.go:66] Checking if "ha-102727-m04" exists ...
	I1025 09:21:44.751238  199199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:21:44.751282  199199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-102727-m04
	I1025 09:21:44.770086  199199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/ha-102727-m04/id_rsa Username:docker}
	I1025 09:21:44.867683  199199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:21:44.880256  199199 status.go:176] ha-102727-m04 status: &{Name:ha-102727-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.31s)
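
Note on the stderr trace above: the freezer-cgroup lookup (sudo egrep ^[0-9]+:freezer: /proc/PID/cgroup) exits 1 on this host because it uses cgroup v2 with the systemd driver, where no separate freezer hierarchy appears in /proc/PID/cgroup; that is logged as a warning, and the check that actually decides the apiserver status is the HTTPS probe of /healthz returning 200 with body "ok". A minimal sketch of such a probe in Go, assuming (as this illustrative code does, not minikube's actual implementation) that the apiserver certificate is not in the host trust store and verification is therefore skipped:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes an apiserver /healthz endpoint and reports whether
// it answered 200 with body "ok", mirroring the log lines above.
func checkHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: the apiserver cert is self-signed from
			// the host's point of view, so verification is skipped here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := checkHealthz("https://192.168.49.254:8443/healthz")
	fmt.Println(ok, err)
}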

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.62s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 node start m02 --alsologtostderr -v 5
E1025 09:21:52.509983  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-102727 node start m02 --alsologtostderr -v 5: (7.680272397s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.58s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-102727 stop --alsologtostderr -v 5: (49.758874956s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 start --wait true --alsologtostderr -v 5
E1025 09:22:45.553004  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:22:45.559437  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:22:45.573890  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:22:45.595377  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:22:45.636908  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:22:45.718428  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:22:45.880250  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:22:46.202032  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:22:46.844165  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:22:48.125849  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:22:50.687908  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:22:55.809821  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:23:06.051599  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:23:15.575441  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:23:26.533169  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-102727 start --wait true --alsologtostderr -v 5: (58.68767239s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.58s)
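
The cert_rotation error lines above come from client-go's certificate-rotation watcher still pointing at profiles (addons-273872, functional-063906) whose certificate files were deleted earlier in the run. Their timestamps are informative: the gap between consecutive retries roughly doubles, from about 6ms at 09:22:45 up to roughly 20s between the last two, which is the signature of an exponential backoff with factor 2. A minimal sketch of that retry shape (the constants and the file path are illustrative, not client-go's actual values):

package main

import (
	"fmt"
	"os"
	"time"
)

// retryWithBackoff retries fn with a delay that doubles after every
// failure, capped at max: the shape visible in the timestamps above.
func retryWithBackoff(fn func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return err
}

func main() {
	err := retryWithBackoff(func() error {
		// Stand-in for reloading a client certificate from disk.
		_, err := os.ReadFile("/path/to/client.crt")
		return err
	}, 5*time.Millisecond, 20*time.Second, 10)
	fmt.Println(err)
}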

TestMultiControlPlane/serial/DeleteSecondaryNode (10.55s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-102727 node delete m03 --alsologtostderr -v 5: (9.728303531s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.55s)
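
The go-template passed to kubectl above iterates every node and every node condition, printing only the status of the condition whose type is Ready, one value per line, so node readiness reduces to a list of True/False lines. The same template can be exercised stand-alone with Go's text/template, as in this sketch (the structs are simplified stand-ins for the real NodeList, with exported field names in place of kubectl's lowercase JSON keys):

package main

import (
	"os"
	"text/template"
)

// Simplified stand-ins for the NodeList structure the kubectl
// go-template walks.
type condition struct{ Type, Status string }
type node struct {
	Status struct{ Conditions []condition }
}
type nodeList struct{ Items []node }

const tmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	ready := node{}
	ready.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
	list := nodeList{Items: []node{ready, ready}}
	// Prints " True" once per node, mirroring the kubectl output.
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, list)
}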

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (47.22s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 stop --alsologtostderr -v 5
E1025 09:24:07.494551  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-102727 stop --alsologtostderr -v 5: (47.107187697s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-102727 status --alsologtostderr -v 5: exit status 7 (117.432715ms)

-- stdout --
	ha-102727
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-102727-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-102727-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 09:24:42.108559  213316 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:24:42.108854  213316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:24:42.108865  213316 out.go:374] Setting ErrFile to fd 2...
	I1025 09:24:42.108872  213316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:24:42.109086  213316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:24:42.109293  213316 out.go:368] Setting JSON to false
	I1025 09:24:42.109333  213316 mustload.go:65] Loading cluster: ha-102727
	I1025 09:24:42.109411  213316 notify.go:220] Checking for updates...
	I1025 09:24:42.109784  213316 config.go:182] Loaded profile config "ha-102727": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:24:42.109808  213316 status.go:174] checking status of ha-102727 ...
	I1025 09:24:42.110251  213316 cli_runner.go:164] Run: docker container inspect ha-102727 --format={{.State.Status}}
	I1025 09:24:42.129969  213316 status.go:371] ha-102727 host status = "Stopped" (err=<nil>)
	I1025 09:24:42.130020  213316 status.go:384] host is not running, skipping remaining checks
	I1025 09:24:42.130033  213316 status.go:176] ha-102727 status: &{Name:ha-102727 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:24:42.130064  213316 status.go:174] checking status of ha-102727-m02 ...
	I1025 09:24:42.130412  213316 cli_runner.go:164] Run: docker container inspect ha-102727-m02 --format={{.State.Status}}
	I1025 09:24:42.148495  213316 status.go:371] ha-102727-m02 host status = "Stopped" (err=<nil>)
	I1025 09:24:42.148520  213316 status.go:384] host is not running, skipping remaining checks
	I1025 09:24:42.148526  213316 status.go:176] ha-102727-m02 status: &{Name:ha-102727-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:24:42.148545  213316 status.go:174] checking status of ha-102727-m04 ...
	I1025 09:24:42.148769  213316 cli_runner.go:164] Run: docker container inspect ha-102727-m04 --format={{.State.Status}}
	I1025 09:24:42.166396  213316 status.go:371] ha-102727-m04 host status = "Stopped" (err=<nil>)
	I1025 09:24:42.166423  213316 status.go:384] host is not running, skipping remaining checks
	I1025 09:24:42.166431  213316 status.go:176] ha-102727-m04 status: &{Name:ha-102727-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (47.22s)
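
After stopping all three remaining nodes, the status command exits with code 7, and the test treats that particular non-zero code as the expected result rather than a failure. A minimal sketch of asserting an exit code from a subprocess in Go (binary, profile, and expected code copied from the run above; exec.ExitError is the standard way to recover the code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs the command and returns its exit code, treating a
// non-zero exit as data rather than a hard error.
func exitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		return 0, nil
	case errors.As(err, &ee):
		return ee.ExitCode(), nil
	default:
		return -1, err // e.g. binary not found
	}
}

func main() {
	code, err := exitCode("out/minikube-linux-amd64", "-p", "ha-102727", "status")
	if err != nil {
		fmt.Println("could not run:", err)
		return
	}
	// The test above expects 7 while the whole cluster is stopped.
	fmt.Printf("exit code: %d\n", code)
}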

TestMultiControlPlane/serial/RestartCluster (57.24s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1025 09:25:29.416543  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-102727 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (56.438119932s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.24s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (63.77s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-102727 node add --control-plane --alsologtostderr -v 5: (1m2.891801599s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-102727 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (63.77s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestJSONOutput/start/Command (38.8s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-179550 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-179550 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.798407677s)
--- PASS: TestJSONOutput/start/Command (38.80s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-179550 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-179550 --output=json --user=testUser: (6.116495344s)
--- PASS: TestJSONOutput/stop/Command (6.12s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-280733 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-280733 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.8014ms)

-- stdout --
	{"specversion":"1.0","id":"a57439e6-67e8-4940-a95a-0fd878d3b2df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-280733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2058b3b-4817-4dca-b0c1-e667f423c1f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21794"}}
	{"specversion":"1.0","id":"b2057cd8-87cd-4040-bf63-1ff01886c737","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"90049654-c005-4280-9d70-d294f1995511","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig"}}
	{"specversion":"1.0","id":"434b2015-4169-483a-bc02-dd0468477408","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube"}}
	{"specversion":"1.0","id":"5c9a2f5b-c003-441f-a907-e981f0c21cd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"64c78b1a-86d6-4861-b77d-2c88b6a61953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"28c13c03-6adc-4ba6-99e9-bbbff2fdb1d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-280733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-280733
--- PASS: TestErrorJSONOutput (0.23s)
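
Every stdout line in the failed start above is a single JSON CloudEvent: specversion 1.0, an id, a source URL, a type such as io.k8s.sigs.minikube.step, .info, or .error, and a data object whose values are all strings (currentstep and totalsteps included). A minimal sketch of consuming such a stream line by line (the struct mirrors the fields visible above; it is not minikube's own schema definition):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent matches the fields visible in the JSON lines above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. piped from a --output=json run
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}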

TestKicCustomNetwork/create_custom_network (34.12s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-544726 --network=
E1025 09:28:13.258387  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-544726 --network=: (31.931380563s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-544726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-544726
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-544726: (2.169150365s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.12s)

TestKicCustomNetwork/use_default_bridge_network (26.43s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-263075 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-263075 --network=bridge: (24.426325336s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-263075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-263075
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-263075: (1.980661155s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.43s)

TestKicExistingNetwork (23.3s)

=== RUN   TestKicExistingNetwork
I1025 09:28:47.074156  134145 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1025 09:28:47.091082  134145 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1025 09:28:47.091156  134145 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1025 09:28:47.091172  134145 cli_runner.go:164] Run: docker network inspect existing-network
W1025 09:28:47.107718  134145 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1025 09:28:47.107752  134145 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1025 09:28:47.107775  134145 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1025 09:28:47.107924  134145 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1025 09:28:47.125140  134145 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b89a58b7fce0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:e2:93:21:98:bc} reservation:<nil>}
I1025 09:28:47.125505  134145 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a6820}
I1025 09:28:47.125536  134145 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1025 09:28:47.125618  134145 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1025 09:28:47.183976  134145 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-999916 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-999916 --network=existing-network: (21.154280206s)
helpers_test.go:175: Cleaning up "existing-network-999916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-999916
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-999916: (1.996859325s)
I1025 09:29:10.353440  134145 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.30s)
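
The network_create lines above show the subnet selection: 192.168.49.0/24 is skipped because the existing minikube bridge already owns it, and the next candidate, 192.168.58.0/24, is reserved and passed to docker network create with an MTU of 1500. A minimal sketch of that kind of scan over /24 candidates (the step of 9 matches the spacing between the subnets seen in this report, 192.168.49.x, 192.168.58.x, 192.168.67.x, but is otherwise an assumption):

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate /24s starting at base, stepping the
// third octet, and returns the first one not present in taken.
func firstFreeSubnet(base net.IP, step, tries int, taken map[string]bool) (string, error) {
	ip := base.To4()
	for i := 0; i < tries; i++ {
		cidr := fmt.Sprintf("%d.%d.%d.0/24", ip[0], ip[1], ip[2])
		if !taken[cidr] {
			return cidr, nil
		}
		ip[2] += byte(step)
	}
	return "", fmt.Errorf("no free /24 found")
}

func main() {
	// Taken set as reported by inspecting the existing bridges.
	taken := map[string]bool{"192.168.49.0/24": true}
	subnet, err := firstFreeSubnet(net.ParseIP("192.168.49.0"), 9, 20, taken)
	fmt.Println(subnet, err) // 192.168.58.0/24, matching the log above
}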

TestKicCustomSubnet (25.05s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-047846 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-047846 --subnet=192.168.60.0/24: (22.843574318s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-047846 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-047846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-047846
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-047846: (2.189012428s)
--- PASS: TestKicCustomSubnet (25.05s)

TestKicStaticIP (27.79s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-022243 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-022243 --static-ip=192.168.200.200: (25.475279591s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-022243 ip
helpers_test.go:175: Cleaning up "static-ip-022243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-022243
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-022243: (2.169392746s)
--- PASS: TestKicStaticIP (27.79s)
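
TestKicCustomSubnet verifies its network with a docker inspect template over the IPAM config, and TestKicStaticIP closes the loop by asking minikube ip for the address and comparing it against the requested 192.168.200.200. A minimal sketch of that final comparison (binary path and profile name copied from the run above):

package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "static-ip-022243", "ip").Output()
	if err != nil {
		fmt.Println("ip command failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	want := "192.168.200.200"
	// ParseIP guards against trailing noise in the command output.
	if ip := net.ParseIP(got); ip == nil || got != want {
		fmt.Printf("unexpected ip %q, want %s\n", got, want)
		return
	}
	fmt.Println("static ip verified:", got)
}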

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (48.92s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-950715 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-950715 --driver=docker  --container-runtime=crio: (23.143873612s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-954102 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-954102 --driver=docker  --container-runtime=crio: (19.772413854s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-950715
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-954102
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-954102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-954102
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-954102: (2.418645238s)
helpers_test.go:175: Cleaning up "first-950715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-950715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-950715: (2.359501316s)
--- PASS: TestMinikubeProfile (48.92s)

TestMountStart/serial/StartWithMountFirst (6.38s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-035369 --memory=3072 --mount-string /tmp/TestMountStartserial577385990/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-035369 --memory=3072 --mount-string /tmp/TestMountStartserial577385990/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.37719583s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.38s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-035369 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (8.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-052668 --memory=3072 --mount-string /tmp/TestMountStartserial577385990/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-052668 --memory=3072 --mount-string /tmp/TestMountStartserial577385990/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.588419021s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.59s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-052668 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-035369 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-035369 --alsologtostderr -v=5: (1.715276362s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-052668 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-052668
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-052668: (1.266325334s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (7.78s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-052668
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-052668: (6.774983966s)
--- PASS: TestMountStart/serial/RestartStopped (7.78s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-052668 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (96.53s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-815809 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1025 09:31:52.510247  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:32:45.552856  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-815809 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m36.044204078s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.53s)

TestMultiNode/serial/DeployApp2Nodes (4.71s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-815809 -- rollout status deployment/busybox: (3.323999744s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- exec busybox-7b57f96db7-jjknt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- exec busybox-7b57f96db7-lf8dr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- exec busybox-7b57f96db7-jjknt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- exec busybox-7b57f96db7-lf8dr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- exec busybox-7b57f96db7-jjknt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- exec busybox-7b57f96db7-lf8dr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.71s)

TestMultiNode/serial/PingHostFrom2Pods (0.71s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- exec busybox-7b57f96db7-jjknt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- exec busybox-7b57f96db7-jjknt -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- exec busybox-7b57f96db7-lf8dr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-815809 -- exec busybox-7b57f96db7-lf8dr -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
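
The shell pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, takes the fifth line of busybox nslookup's output and extracts its third space-separated field, the host-gateway IP that the follow-up ping (192.168.67.1 here) then targets. A minimal Go sketch of the same extraction over captured output (the sample text is an illustrative approximation of busybox nslookup's layout, not a captured transcript):

package main

import (
	"fmt"
	"strings"
)

// fieldAt mimics `awk 'NR==line' | cut -d' ' -fcol` (both 1-based;
// empty fields count, as with cut's single-space delimiter).
func fieldAt(out string, line, col int) string {
	lines := strings.Split(out, "\n")
	if line > len(lines) {
		return ""
	}
	fields := strings.Split(lines[line-1], " ")
	if col > len(fields) {
		return ""
	}
	return fields[col-1]
}

func main() {
	// Illustrative busybox nslookup output layout.
	sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress: 1 192.168.67.1\n"
	fmt.Println(fieldAt(sample, 5, 3)) // 192.168.67.1
}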

TestMultiNode/serial/AddNode (23.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-815809 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-815809 -v=5 --alsologtostderr: (22.982797579s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.64s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-815809 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.86s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 cp testdata/cp-test.txt multinode-815809:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 cp multinode-815809:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3613398254/001/cp-test_multinode-815809.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 cp multinode-815809:/home/docker/cp-test.txt multinode-815809-m02:/home/docker/cp-test_multinode-815809_multinode-815809-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809-m02 "sudo cat /home/docker/cp-test_multinode-815809_multinode-815809-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 cp multinode-815809:/home/docker/cp-test.txt multinode-815809-m03:/home/docker/cp-test_multinode-815809_multinode-815809-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809-m03 "sudo cat /home/docker/cp-test_multinode-815809_multinode-815809-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 cp testdata/cp-test.txt multinode-815809-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 cp multinode-815809-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3613398254/001/cp-test_multinode-815809-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 cp multinode-815809-m02:/home/docker/cp-test.txt multinode-815809:/home/docker/cp-test_multinode-815809-m02_multinode-815809.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809 "sudo cat /home/docker/cp-test_multinode-815809-m02_multinode-815809.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 cp multinode-815809-m02:/home/docker/cp-test.txt multinode-815809-m03:/home/docker/cp-test_multinode-815809-m02_multinode-815809-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809-m03 "sudo cat /home/docker/cp-test_multinode-815809-m02_multinode-815809-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 cp testdata/cp-test.txt multinode-815809-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 cp multinode-815809-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3613398254/001/cp-test_multinode-815809-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 cp multinode-815809-m03:/home/docker/cp-test.txt multinode-815809:/home/docker/cp-test_multinode-815809-m03_multinode-815809.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809 "sudo cat /home/docker/cp-test_multinode-815809-m03_multinode-815809.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 cp multinode-815809-m03:/home/docker/cp-test.txt multinode-815809-m02:/home/docker/cp-test_multinode-815809-m03_multinode-815809-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 ssh -n multinode-815809-m02 "sudo cat /home/docker/cp-test_multinode-815809-m03_multinode-815809-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.86s)
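
For context, the CopyFile test above is a round-trip check: every `minikube cp` is immediately verified by a `minikube ssh ... sudo cat` of the destination path. A minimal Go sketch of that pattern (assuming a `minikube` binary on PATH and reusing the profile name from this run; this is an illustration of the pattern, not the test's actual helper code):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	// run executes a command and returns its combined output, failing fast on error.
	func run(args ...string) string {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v failed: %v\n%s", args, err, out)
		}
		return string(out)
	}

	func main() {
		profile := "multinode-815809" // profile name taken from this run
		local, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}
		// Copy into the node, then read the file back over ssh and compare.
		run("minikube", "-p", profile, "cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
		remote := run("minikube", "-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
		if string(local) != remote {
			log.Fatal("round-tripped file does not match source")
		}
		fmt.Println("cp-test.txt verified")
	}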

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-815809 node stop m03: (1.266687768s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-815809 status: exit status 7 (499.386501ms)

-- stdout --
	multinode-815809
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-815809-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-815809-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-815809 status --alsologtostderr: exit status 7 (495.423392ms)

-- stdout --
	multinode-815809
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-815809-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-815809-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1025 09:33:38.884839  273004 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:33:38.885088  273004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:33:38.885097  273004 out.go:374] Setting ErrFile to fd 2...
	I1025 09:33:38.885101  273004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:33:38.885283  273004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:33:38.885459  273004 out.go:368] Setting JSON to false
	I1025 09:33:38.885496  273004 mustload.go:65] Loading cluster: multinode-815809
	I1025 09:33:38.885596  273004 notify.go:220] Checking for updates...
	I1025 09:33:38.885839  273004 config.go:182] Loaded profile config "multinode-815809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:38.885854  273004 status.go:174] checking status of multinode-815809 ...
	I1025 09:33:38.886266  273004 cli_runner.go:164] Run: docker container inspect multinode-815809 --format={{.State.Status}}
	I1025 09:33:38.905678  273004 status.go:371] multinode-815809 host status = "Running" (err=<nil>)
	I1025 09:33:38.905701  273004 host.go:66] Checking if "multinode-815809" exists ...
	I1025 09:33:38.905973  273004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-815809
	I1025 09:33:38.923450  273004 host.go:66] Checking if "multinode-815809" exists ...
	I1025 09:33:38.923726  273004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:33:38.923771  273004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-815809
	I1025 09:33:38.941313  273004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/multinode-815809/id_rsa Username:docker}
	I1025 09:33:39.038841  273004 ssh_runner.go:195] Run: systemctl --version
	I1025 09:33:39.045204  273004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:33:39.057303  273004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:33:39.111857  273004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:33:39.102309821 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:33:39.112494  273004 kubeconfig.go:125] found "multinode-815809" server: "https://192.168.67.2:8443"
	I1025 09:33:39.112526  273004 api_server.go:166] Checking apiserver status ...
	I1025 09:33:39.112561  273004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:33:39.124232  273004 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup
	W1025 09:33:39.132522  273004 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:33:39.132577  273004 ssh_runner.go:195] Run: ls
	I1025 09:33:39.136202  273004 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1025 09:33:39.140307  273004 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1025 09:33:39.140328  273004 status.go:463] multinode-815809 apiserver status = Running (err=<nil>)
	I1025 09:33:39.140337  273004 status.go:176] multinode-815809 status: &{Name:multinode-815809 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:33:39.140401  273004 status.go:174] checking status of multinode-815809-m02 ...
	I1025 09:33:39.140642  273004 cli_runner.go:164] Run: docker container inspect multinode-815809-m02 --format={{.State.Status}}
	I1025 09:33:39.159212  273004 status.go:371] multinode-815809-m02 host status = "Running" (err=<nil>)
	I1025 09:33:39.159235  273004 host.go:66] Checking if "multinode-815809-m02" exists ...
	I1025 09:33:39.159510  273004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-815809-m02
	I1025 09:33:39.176436  273004 host.go:66] Checking if "multinode-815809-m02" exists ...
	I1025 09:33:39.176774  273004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:33:39.176825  273004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-815809-m02
	I1025 09:33:39.194337  273004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21794-130604/.minikube/machines/multinode-815809-m02/id_rsa Username:docker}
	I1025 09:33:39.290481  273004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:33:39.302728  273004 status.go:176] multinode-815809-m02 status: &{Name:multinode-815809-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:33:39.302762  273004 status.go:174] checking status of multinode-815809-m03 ...
	I1025 09:33:39.303056  273004 cli_runner.go:164] Run: docker container inspect multinode-815809-m03 --format={{.State.Status}}
	I1025 09:33:39.320832  273004 status.go:371] multinode-815809-m03 host status = "Stopped" (err=<nil>)
	I1025 09:33:39.320852  273004 status.go:384] host is not running, skipping remaining checks
	I1025 09:33:39.320858  273004 status.go:176] multinode-815809-m03 status: &{Name:multinode-815809-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
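
Note the exit status 7 above: `minikube status` deliberately exits non-zero when any host is stopped, so the test asserts on the exit code instead of treating it as a failure. A short illustrative Go sketch of distinguishing that case in a caller (exit-code meaning taken from the run above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "multinode-815809", "status").CombinedOutput()
		fmt.Print(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Exit status 7 (seen above) signals a stopped host, not a hard failure.
			fmt.Println("status exited with code", ee.ExitCode())
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
		}
	}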

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.92s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-815809 node start m03 -v=5 --alsologtostderr: (7.215418058s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.92s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (81.66s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-815809
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-815809
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-815809: (29.59302129s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-815809 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-815809 --wait=true -v=5 --alsologtostderr: (51.941544319s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-815809
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.66s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.23s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-815809 node delete m03: (4.634236324s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.23s)
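
The go-template passed to kubectl above walks every node's conditions and prints the status of the "Ready" condition, one line per node. The same template logic can be exercised locally with Go's text/template; the struct below is a hypothetical stand-in for the NodeList shape (field names are capitalized because this sketch ranges over Go structs, whereas kubectl's template sees lowercase JSON fields):

	package main

	import (
		"os"
		"text/template"
	)

	type condition struct{ Type, Status string }
	type node struct {
		Status struct{ Conditions []condition }
	}

	// Same structure as the kubectl template above: for each item, for each
	// condition, print the status when the condition type is "Ready".
	const tmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		var n node
		n.Status.Conditions = []condition{{"MemoryPressure", "False"}, {"Ready", "True"}}
		list := struct{ Items []node }{Items: []node{n, n}}
		t := template.Must(template.New("ready").Parse(tmpl))
		if err := t.Execute(os.Stdout, list); err != nil {
			panic(err)
		}
		// Prints " True" once per node, which is what the test checks for.
	}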

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.33s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-815809 stop: (30.134928087s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-815809 status: exit status 7 (97.409145ms)

-- stdout --
	multinode-815809
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-815809-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-815809 status --alsologtostderr: exit status 7 (97.717876ms)

-- stdout --
	multinode-815809
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-815809-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1025 09:35:44.426979  282828 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:44.427223  282828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:44.427231  282828 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:44.427235  282828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:44.427450  282828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:35:44.427643  282828 out.go:368] Setting JSON to false
	I1025 09:35:44.427674  282828 mustload.go:65] Loading cluster: multinode-815809
	I1025 09:35:44.427776  282828 notify.go:220] Checking for updates...
	I1025 09:35:44.428039  282828 config.go:182] Loaded profile config "multinode-815809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:44.428057  282828 status.go:174] checking status of multinode-815809 ...
	I1025 09:35:44.428537  282828 cli_runner.go:164] Run: docker container inspect multinode-815809 --format={{.State.Status}}
	I1025 09:35:44.447749  282828 status.go:371] multinode-815809 host status = "Stopped" (err=<nil>)
	I1025 09:35:44.447770  282828 status.go:384] host is not running, skipping remaining checks
	I1025 09:35:44.447779  282828 status.go:176] multinode-815809 status: &{Name:multinode-815809 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:35:44.447828  282828 status.go:174] checking status of multinode-815809-m02 ...
	I1025 09:35:44.448069  282828 cli_runner.go:164] Run: docker container inspect multinode-815809-m02 --format={{.State.Status}}
	I1025 09:35:44.465213  282828 status.go:371] multinode-815809-m02 host status = "Stopped" (err=<nil>)
	I1025 09:35:44.465234  282828 status.go:384] host is not running, skipping remaining checks
	I1025 09:35:44.465240  282828 status.go:176] multinode-815809-m02 status: &{Name:multinode-815809-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.33s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (27.57s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-815809 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-815809 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (26.956919228s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-815809 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (27.57s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.4s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-815809
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-815809-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-815809-m02 --driver=docker  --container-runtime=crio: exit status 14 (74.845269ms)

-- stdout --
	* [multinode-815809-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	! Profile name 'multinode-815809-m02' is duplicated with machine name 'multinode-815809-m02' in profile 'multinode-815809'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-815809-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-815809-m03 --driver=docker  --container-runtime=crio: (20.589596471s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-815809
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-815809: exit status 80 (287.257493ms)

-- stdout --
	* Adding node m03 to cluster multinode-815809 as [worker]

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-815809-m03 already exists in multinode-815809-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-815809-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-815809-m03: (2.388141655s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.40s)
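
The MK_USAGE failure above comes down to a simple uniqueness rule: a new profile name may not collide with an existing profile or with a machine (node) name inside a multi-node profile. An illustrative re-implementation of that check in Go (not minikube's actual code; names taken from this run):

	package main

	import "fmt"

	// validateProfileName rejects a name already used by an existing profile
	// or by a machine inside an existing multi-node profile.
	func validateProfileName(name string, taken []string) error {
		for _, t := range taken {
			if t == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q", name, t)
			}
		}
		return nil
	}

	func main() {
		taken := []string{"multinode-815809", "multinode-815809-m02", "multinode-815809-m03"}
		if err := validateProfileName("multinode-815809-m02", taken); err != nil {
			fmt.Println("X Exiting due to MK_USAGE:", err) // matches the failure above
		}
	}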

                                                
                                    
TestScheduledStopUnix (96.12s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-692905 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-692905 --memory=3072 --driver=docker  --container-runtime=crio: (20.094932373s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-692905 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-692905 -n scheduled-stop-692905
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-692905 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1025 09:44:18.110441  134145 retry.go:31] will retry after 80.803µs: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.111619  134145 retry.go:31] will retry after 115.17µs: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.112750  134145 retry.go:31] will retry after 328.47µs: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.113899  134145 retry.go:31] will retry after 441.168µs: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.115037  134145 retry.go:31] will retry after 324.828µs: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.116182  134145 retry.go:31] will retry after 824.831µs: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.117334  134145 retry.go:31] will retry after 814.026µs: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.118479  134145 retry.go:31] will retry after 2.103444ms: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.121670  134145 retry.go:31] will retry after 1.753124ms: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.123903  134145 retry.go:31] will retry after 5.079828ms: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.129071  134145 retry.go:31] will retry after 3.537072ms: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.133279  134145 retry.go:31] will retry after 11.325945ms: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.145478  134145 retry.go:31] will retry after 13.573808ms: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.159725  134145 retry.go:31] will retry after 25.577128ms: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
I1025 09:44:18.185999  134145 retry.go:31] will retry after 38.106675ms: open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/scheduled-stop-692905/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-692905 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-692905 -n scheduled-stop-692905
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-692905
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-692905 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-692905
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-692905: exit status 7 (78.980539ms)

-- stdout --
	scheduled-stop-692905
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-692905 -n scheduled-stop-692905
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-692905 -n scheduled-stop-692905: exit status 7 (78.1797ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-692905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-692905
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-692905: (4.527577991s)
--- PASS: TestScheduledStopUnix (96.12s)
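
The `retry.go:31` lines above show a jittered, roughly doubling backoff while the test waits for the scheduled-stop pid file to appear. A self-contained sketch of that loop shape (the initial delay, jitter, and file path here are illustrative, not minikube's exact constants):

	package main

	import (
		"fmt"
		"math/rand"
		"os"
		"time"
	)

	// retry calls f until it succeeds or maxWait elapses, sleeping a jittered,
	// roughly doubling delay between attempts, like the log lines above.
	func retry(maxWait time.Duration, f func() error) error {
		delay := 100 * time.Microsecond
		deadline := time.Now().Add(maxWait)
		for {
			err := f()
			if err == nil || time.Now().After(deadline) {
				return err
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
	}

	func main() {
		err := retry(5*time.Millisecond, func() error {
			_, err := os.Stat("/tmp/scheduled-stop.pid") // stands in for the profile's pid file
			return err
		})
		fmt.Println("result:", err)
	}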

                                                
                                    
TestInsufficientStorage (9.44s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-956587 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-956587 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.930406328s)

-- stdout --
	{"specversion":"1.0","id":"881f6279-00f6-4d4a-9169-848ed8f5373e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-956587] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c5f3147-f906-4207-ba9c-386060d775ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21794"}}
	{"specversion":"1.0","id":"e2e208ba-2374-442c-abb9-c415c60fe49f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"03a1248b-a37f-4948-b90c-1fefad4ddb12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig"}}
	{"specversion":"1.0","id":"b571f221-c1e0-411d-a4c1-a31baf6f31a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube"}}
	{"specversion":"1.0","id":"f7b32632-6258-40e5-81ab-c619ef61bbf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0324e1a6-7a10-489e-8b69-55a165f2ca79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5cf11a02-c3ff-48ec-bdc6-f1bf1acea154","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"195d57d5-ae39-434a-8258-e8b1d201b2dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f64c8724-3d5c-4045-a7db-0ed0ab7d133a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b73a3e93-a095-491f-ba1d-bd10f9de5a58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"9b99d08f-82ed-4fbd-8c8b-471a442b21c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-956587\" primary control-plane node in \"insufficient-storage-956587\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8b86ccd-384e-4b5e-b641-c1f197d19188","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"773f67b3-0e63-4126-9444-b15b981dd104","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6047a81-e6dc-4182-8c76-4df123729000","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-956587 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-956587 --output=json --layout=cluster: exit status 7 (294.923071ms)

-- stdout --
	{"Name":"insufficient-storage-956587","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-956587","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 09:45:40.898974  304527 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-956587" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-956587 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-956587 --output=json --layout=cluster: exit status 7 (289.537384ms)

-- stdout --
	{"Name":"insufficient-storage-956587","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-956587","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 09:45:41.189061  304637 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-956587" does not appear in /home/jenkins/minikube-integration/21794-130604/kubeconfig
	E1025 09:45:41.199508  304637 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/insufficient-storage-956587/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-956587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-956587
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-956587: (1.928098668s)
--- PASS: TestInsufficientStorage (9.44s)
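
With `--output=json`, each stdout line above is a CloudEvents 1.0 envelope ("specversion":"1.0"), and the RSRC_DOCKER_STORAGE failure arrives as an `io.k8s.sigs.minikube.error` event. A minimal decoder sketch; the struct models only the envelope fields actually visible in this log:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event models just the fields of the envelope shown in the output above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}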

                                                
                                    
TestRunningBinaryUpgrade (50.05s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2883125829 start -p running-upgrade-465613 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2883125829 start -p running-upgrade-465613 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.585984428s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-465613 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-465613 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.364808925s)
helpers_test.go:175: Cleaning up "running-upgrade-465613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-465613
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-465613: (2.436825292s)
--- PASS: TestRunningBinaryUpgrade (50.05s)

                                                
                                    
TestKubernetesUpgrade (300.47s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-129588 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1025 09:47:45.552361  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-129588 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.740279643s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-129588
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-129588: (2.232109236s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-129588 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-129588 status --format={{.Host}}: exit status 7 (83.945614ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-129588 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-129588 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m23.346918567s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-129588 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-129588 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-129588 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (95.729498ms)

-- stdout --
	* [kubernetes-upgrade-129588] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-129588
	    minikube start -p kubernetes-upgrade-129588 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1295882 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-129588 --kubernetes-version=v1.34.1

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-129588 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-129588 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.320369601s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-129588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-129588
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-129588: (2.567092877s)
--- PASS: TestKubernetesUpgrade (300.47s)
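
Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) above boils down to a version comparison: the requested v1.28.0 is older than the cluster's v1.34.1. A minimal sketch of that check with a hand-rolled semver compare (minikube's real implementation may differ):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// parse splits "v1.34.1" into numeric parts; assumes well-formed input.
	func parse(v string) [3]int {
		var out [3]int
		for i, p := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
			out[i], _ = strconv.Atoi(p)
		}
		return out
	}

	func main() {
		current, requested := parse("v1.34.1"), parse("v1.28.0")
		for i := range current {
			if requested[i] < current[i] {
				fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade v1.34.1 to v1.28.0")
				return
			}
			if requested[i] > current[i] {
				break // an upgrade is allowed
			}
		}
	}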

                                                
                                    
TestMissingContainerUpgrade (72.21s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1405297925 start -p missing-upgrade-027112 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1405297925 start -p missing-upgrade-027112 --memory=3072 --driver=docker  --container-runtime=crio: (26.642782574s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-027112
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-027112: (1.72325826s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-027112
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-027112 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-027112 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.04378326s)
helpers_test.go:175: Cleaning up "missing-upgrade-027112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-027112
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-027112: (2.089373361s)
--- PASS: TestMissingContainerUpgrade (72.21s)

                                                
                                    
TestPause/serial/Start (48.74s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-175355 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-175355 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (48.735440352s)
--- PASS: TestPause/serial/Start (48.74s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-617681 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-617681 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (103.81024ms)

-- stdout --
	* [NoKubernetes-617681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (32.61s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-617681 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-617681 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.249139857s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-617681 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.61s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.85s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-617681 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-617681 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (15.277648024s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-617681 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-617681 status -o json: exit status 2 (316.771609ms)

-- stdout --
	{"Name":"NoKubernetes-617681","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-617681
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-617681: (2.258583501s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.85s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.15s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-175355 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-175355 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.135496319s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.15s)

                                                
                                    
TestNoKubernetes/serial/Start (6.88s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-617681 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-617681 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.882194154s)
--- PASS: TestNoKubernetes/serial/Start (6.88s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-617681 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-617681 "sudo systemctl is-active --quiet service kubelet": exit status 1 (308.629712ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
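
Here success means the command failing: `systemctl is-active` returns non-zero for the kubelet unit (status 3 in the run above), which is what proves Kubernetes is not running. A sketch of asserting on that in Go, using the same command the test runs (profile name from this run; illustrative only):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-617681",
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("FAIL: kubelet is active despite --no-kubernetes")
		case errors.As(err, &ee):
			fmt.Println("ok: kubelet inactive, exit status", ee.ExitCode())
		default:
			fmt.Println("could not reach node:", err)
		}
	}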

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.02s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (1.029540876s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.02s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-617681
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-617681: (1.310548374s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.52s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-617681 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-617681 --driver=docker  --container-runtime=crio: (7.516014756s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.52s)

                                                
                                    
TestNetworkPlugins/group/false (4.1s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-035825 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-035825 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (188.900946ms)

-- stdout --
	* [false-035825] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1025 09:46:50.954840  326287 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:46:50.955048  326287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:50.955056  326287 out.go:374] Setting ErrFile to fd 2...
	I1025 09:46:50.955060  326287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:50.955272  326287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-130604/.minikube/bin
	I1025 09:46:50.955744  326287 out.go:368] Setting JSON to false
	I1025 09:46:50.956857  326287 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5355,"bootTime":1761380256,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:46:50.956956  326287 start.go:141] virtualization: kvm guest
	I1025 09:46:50.958710  326287 out.go:179] * [false-035825] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:46:50.959928  326287 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:46:50.959927  326287 notify.go:220] Checking for updates...
	I1025 09:46:50.961255  326287 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:46:50.962521  326287 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-130604/kubeconfig
	I1025 09:46:50.964041  326287 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-130604/.minikube
	I1025 09:46:50.965450  326287 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:46:50.966785  326287 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:46:50.968755  326287 config.go:182] Loaded profile config "NoKubernetes-617681": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1025 09:46:50.968894  326287 config.go:182] Loaded profile config "cert-expiration-225615": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:50.969029  326287 config.go:182] Loaded profile config "force-systemd-flag-170120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:50.969152  326287 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:46:50.995081  326287 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:46:50.995361  326287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:51.072328  326287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-25 09:46:51.061456743 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:46:51.072498  326287 docker.go:318] overlay module found
	I1025 09:46:51.074083  326287 out.go:179] * Using the docker driver based on user configuration
	I1025 09:46:51.075131  326287 start.go:305] selected driver: docker
	I1025 09:46:51.075144  326287 start.go:925] validating driver "docker" against <nil>
	I1025 09:46:51.075159  326287 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:46:51.076806  326287 out.go:203] 
	W1025 09:46:51.077804  326287 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1025 09:46:51.078905  326287 out.go:203] 
** /stderr **
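
Note: exit status 14 corresponds to minikube's MK_USAGE error class shown in the stderr above: the crio runtime always needs a CNI plugin, so "--cni=false" is rejected during flag validation before any cluster is created, and the test counts this fast failure as a pass. A sketch of an invocation crio would accept (substituting any real CNI, e.g. bridge, for false):

    out/minikube-linux-amd64 start -p false-035825 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio
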
net_test.go:88: 
----------------------- debugLogs start: false-035825 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-035825

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-035825

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-035825

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-035825

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-035825

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-035825

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-035825

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-035825

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-035825

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-035825

>>> host: /etc/nsswitch.conf:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: /etc/hosts:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: /etc/resolv.conf:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-035825

>>> host: crictl pods:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: crictl containers:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> k8s: describe netcat deployment:
error: context "false-035825" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-035825" does not exist

>>> k8s: netcat logs:
error: context "false-035825" does not exist

>>> k8s: describe coredns deployment:
error: context "false-035825" does not exist

>>> k8s: describe coredns pods:
error: context "false-035825" does not exist

>>> k8s: coredns logs:
error: context "false-035825" does not exist

>>> k8s: describe api server pod(s):
error: context "false-035825" does not exist

>>> k8s: api server logs:
error: context "false-035825" does not exist

>>> host: /etc/cni:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: ip a s:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: ip r s:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: iptables-save:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: iptables table nat:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> k8s: describe kube-proxy daemon set:
error: context "false-035825" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-035825" does not exist

>>> k8s: kube-proxy logs:
error: context "false-035825" does not exist

>>> host: kubelet daemon status:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: kubelet daemon config:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> k8s: kubelet logs:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:46:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-225615
contexts:
- context:
    cluster: cert-expiration-225615
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:46:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-225615
  name: cert-expiration-225615
current-context: ""
kind: Config
users:
- name: cert-expiration-225615
  user:
    client-certificate: /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/cert-expiration-225615/client.crt
    client-key: /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/cert-expiration-225615/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-035825

>>> host: docker daemon status:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: docker daemon config:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: /etc/docker/daemon.json:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: docker system info:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: cri-docker daemon status:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: cri-docker daemon config:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: cri-dockerd version:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: containerd daemon status:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: containerd daemon config:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: /etc/containerd/config.toml:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: containerd config dump:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: crio daemon status:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: crio daemon config:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: /etc/crio:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"

>>> host: crio config:
* Profile "false-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035825"
----------------------- debugLogs end: false-035825 [took: 3.707837324s] --------------------------------
helpers_test.go:175: Cleaning up "false-035825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-035825
--- PASS: TestNetworkPlugins/group/false (4.10s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-617681 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-617681 "sudo systemctl is-active --quiet service kubelet": exit status 1 (302.01626ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/Setup (2.68s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.68s)

TestStoppedBinaryUpgrade/Upgrade (67.92s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2111956377 start -p stopped-upgrade-654495 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2111956377 start -p stopped-upgrade-654495 --memory=3072 --vm-driver=docker  --container-runtime=crio: (49.414077042s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2111956377 -p stopped-upgrade-654495 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2111956377 -p stopped-upgrade-654495 stop: (2.470124876s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-654495 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-654495 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.030833699s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (67.92s)
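
Note: the upgrade scenario drives one profile with two binaries: start a cluster with an old minikube release (the /tmp file is the test's temporary copy of v1.32.0), stop it, then restart the same profile with the freshly built binary. The equivalent manual sequence, taken from the commands above:

    /tmp/minikube-v1.32.0.2111956377 start -p stopped-upgrade-654495 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.32.0.2111956377 -p stopped-upgrade-654495 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-654495 --memory=3072 --driver=docker --container-runtime=crio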

TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-654495
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-654495: (1.033627095s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

TestNetworkPlugins/group/auto/Start (36.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (36.777443282s)
--- PASS: TestNetworkPlugins/group/auto/Start (36.78s)

TestNetworkPlugins/group/kindnet/Start (70.25s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m10.248863651s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.25s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-035825 "pgrep -a kubelet"
I1025 09:49:16.784574  134145 config.go:182] Loaded profile config "auto-035825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
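
Note: each KubeletFlags step only runs "pgrep -a kubelet" over ssh so the test can inspect the full kubelet command line for the expected flags. A hand-run sketch (the output shape below is illustrative, not captured from this run):

    out/minikube-linux-amd64 ssh -p auto-035825 "pgrep -a kubelet"
    # <pid> /var/lib/minikube/binaries/v1.34.1/kubelet --container-runtime-endpoint=... <more flags>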

TestNetworkPlugins/group/auto/NetCatPod (8.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-035825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8gr2s" [edd59057-3e24-4f91-843e-bfa065a11cc6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8gr2s" [edd59057-3e24-4f91-843e-bfa065a11cc6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004262275s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.23s)
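
Note: each NetCatPod step force-replaces testdata/netcat-deployment.yaml and then polls for pods labeled app=netcat to go from Pending to Running. A hedged kubectl-only equivalent of the wait (the harness polls the API itself rather than shelling out like this):

    kubectl --context auto-035825 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-035825 wait --for=condition=Ready pod -l app=netcat --timeout=15m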

TestNetworkPlugins/group/calico/Start (51.75s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (51.745995715s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.75s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-035825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)
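
Note: the DNS probe resolves the in-cluster name kubernetes.default from inside the netcat pod, exercising the CNI data path plus CoreDNS end to end. To isolate the resolver, the same lookup can be pointed at the cluster DNS service IP directly (10.96.0.10 is the conventional address in minikube's default service CIDR, as the debugLogs earlier in this report also assume):

    kubectl --context auto-035825 exec deployment/netcat -- nslookup kubernetes.default 10.96.0.10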

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
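
Note: Localhost and HairPin are paired probes. Localhost confirms the pod's own listener on 8080 is up; HairPin has the pod dial its own service name ("netcat"), so the connection leaves the pod for the service VIP and must be NATed back to the same pod, which is what hairpin mode enables. The two commands from the log:

    kubectl --context auto-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"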

TestNetworkPlugins/group/custom-flannel/Start (53.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (53.197812461s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.20s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-mnwpw" [88aa9b88-a41e-4e88-a79c-b4537b824be0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004470162s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
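
Note: for CNIs that ship their own agent, ControllerPod first waits for that agent's DaemonSet pod (here label app=kindnet in kube-system) to be Running before the connectivity subtests start. A hedged kubectl equivalent of the wait:

    kubectl --context kindnet-035825 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m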

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-vbgkv" [78fadd3a-9408-4c27-b46d-0dc3e63623b7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00476759s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-035825 "pgrep -a kubelet"
I1025 09:50:21.310405  134145 config.go:182] Loaded profile config "kindnet-035825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-035825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-grqbk" [154e3a45-8385-4371-81b7-0bc421f983cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
I1025 09:50:21.781182  134145 config.go:182] Loaded profile config "calico-035825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:352: "netcat-cd4db9dbf-grqbk" [154e3a45-8385-4371-81b7-0bc421f983cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004456349s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.22s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-035825 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (13.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-035825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z4nf9" [e33f4618-8e2c-497c-9923-0583c6567d3b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z4nf9" [e33f4618-8e2c-497c-9923-0583c6567d3b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.00408104s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.20s)

TestNetworkPlugins/group/kindnet/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-035825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

TestNetworkPlugins/group/kindnet/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.08s)

TestNetworkPlugins/group/kindnet/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-035825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-035825 "pgrep -a kubelet"
I1025 09:50:38.755392  134145 config.go:182] Loaded profile config "custom-flannel-035825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (14.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-035825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jbdvm" [fa2e88d3-11ce-4f2f-940b-2c6533b4ad27] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jbdvm" [fa2e88d3-11ce-4f2f-940b-2c6533b4ad27] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.004257815s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.17s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-035825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (71.68s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m11.675682499s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.68s)

TestNetworkPlugins/group/flannel/Start (49.93s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (49.92557605s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.93s)

TestNetworkPlugins/group/bridge/Start (38.99s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-035825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (38.987853145s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.99s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-vcj98" [91e96bcf-ab9d-4120-bc00-96d1328d10d2] Running
E1025 09:51:52.509853  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/addons-273872/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.002991886s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-035825 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-035825 "pgrep -a kubelet"
I1025 09:51:54.119067  134145 config.go:182] Loaded profile config "bridge-035825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-035825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-thh2v" [cbb7835a-0219-4041-a198-44f3681cec48] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
I1025 09:51:54.377798  134145 config.go:182] Loaded profile config "flannel-035825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:352: "netcat-cd4db9dbf-thh2v" [cbb7835a-0219-4041-a198-44f3681cec48] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003488968s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-035825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-np85s" [fcb1d6dc-b18c-442f-81c4-13660b35886a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-np85s" [fcb1d6dc-b18c-442f-81c4-13660b35886a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004161154s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-035825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-035825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-035825 "pgrep -a kubelet"
I1025 09:52:06.068647  134145 config.go:182] Loaded profile config "enable-default-cni-035825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-035825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sdrg9" [47ffd2e7-8684-4c6d-b287-dca2b9be4e8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sdrg9" [47ffd2e7-8684-4c6d-b287-dca2b9be4e8c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.00433583s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.17s)
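
Each NetCatPod step replaces the deployment and then polls for up to 15m until the app=netcat pod goes Pending -> Running. A rough hand-equivalent of that wait, using kubectl's own readiness gate as a stand-in for the suite's polling helper in helpers_test.go:

    # Wait until the netcat pod reports Ready (sketch of the suite's 15m poll).
    kubectl --context enable-default-cni-035825 wait pod -l app=netcat \
      --for=condition=Ready --timeout=15m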

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-035825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-035825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (51.52s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.524094479s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.52s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (64.93s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m4.934302503s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.93s)
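
The --preload=false flag is what defines this group: minikube skips its preloaded image tarball, so the container images are pulled at start time instead of being unpacked locally. The start, reduced to the flags that matter (a sketch, not the verbatim invocation above):

    # Start without the preload tarball; images are pulled rather than unpacked.
    out/minikube-linux-amd64 start -p no-preload-656799 --memory=3072 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1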

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.8s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m14.798495132s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.80s)
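
This profile exists to prove a non-default API server port works end to end (--apiserver-port=8444 instead of minikube's default 8443). A quick hand-check that the port actually took effect, assuming kubectl is pointed at the profile's context:

    # The control-plane URL printed here should end in :8444.
    kubectl --context default-k8s-diff-port-880773 cluster-info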

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (30.85s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:52:45.553138  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (30.846242213s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.85s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.2s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-042675 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-042675 --alsologtostderr -v=3: (8.198185087s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-676314 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f284177c-1d8d-4d46-8b15-3d8cb988f9d5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f284177c-1d8d-4d46-8b15-3d8cb988f9d5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004294443s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-676314 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.26s)
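
DeployApp finishes by exec'ing "ulimit -n" inside the busybox pod, presumably to confirm that the open-file-descriptor limit the container runtime (CRI-O here) hands to containers is queryable and sane. The same check by hand:

    # Print the pod's open-files soft limit; the test shells in exactly like this.
    kubectl --context old-k8s-version-676314 exec busybox -- /bin/sh -c "ulimit -n"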

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-042675 -n newest-cni-042675
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-042675 -n newest-cni-042675: exit status 7 (82.624198ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-042675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
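
The "exit status 7 (may be ok)" note is expected here: minikube's status command appears to encode component state as bits in the exit code (per its help text, roughly 1 = host not running, 2 = kubelet not running, 4 = apiserver not running), so 7 reads as "everything stopped", which is exactly the state this step wants before enabling the dashboard addon offline. A sketch of handling it without treating 7 as a failure:

    # Exit code 7 = 1+2+4, i.e. host, kubelet and apiserver all down (assumed bit
    # meanings from minikube's status docs; verify against the version in use).
    out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-042675
    rc=$?
    [ "$rc" -eq 7 ] && echo "profile fully stopped, safe to enable addons offline"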

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (10.93s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-042675 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.586713771s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-042675 -n newest-cni-042675
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (17.51s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-676314 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-676314 --alsologtostderr -v=3: (17.511065151s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (17.51s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-042675 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
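
VerifyKubernetesImages lists the images in the runtime as JSON and flags anything that is not a stock minikube/Kubernetes image (here the kindnet CNI image). A hand-rolled version of that filter; note the repoTags field name is an assumption about the JSON shape of "image list --format=json", not something taken from the log, so verify it against your minikube version:

    # List image tags and drop the stock registries to see what the test calls
    # "non-minikube" images (the jq field name is an assumption).
    out/minikube-linux-amd64 -p newest-cni-042675 image list --format=json \
      | jq -r '.[].repoTags[]?' | grep -v 'registry.k8s.io\|gcr.io/k8s-minikube'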

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.23s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-656799 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e58484e4-93ad-4c1e-af87-8034efb88486] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e58484e4-93ad-4c1e-af87-8034efb88486] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003848455s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-656799 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (39.97s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (39.971400823s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (39.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.4s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-656799 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-656799 --alsologtostderr -v=3: (16.40268879s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-676314 -n old-k8s-version-676314
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-676314 -n old-k8s-version-676314: exit status 7 (84.287005ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-676314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (50.59s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-676314 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.251471141s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-676314 -n old-k8s-version-676314
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.59s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-880773 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f76f3cf0-8a0d-49fb-82e3-f5be92acdc5c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f76f3cf0-8a0d-49fb-82e3-f5be92acdc5c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004101739s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-880773 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.3s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-656799 -n no-preload-656799
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-656799 -n no-preload-656799: exit status 7 (102.562038ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-656799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (24.93s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-656799 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (24.527032939s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-656799 -n no-preload-656799
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (24.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.98s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-880773 --alsologtostderr -v=3
E1025 09:54:17.003980  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:17.010487  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:17.021944  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:17.043379  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:17.084825  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:17.166306  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:17.327830  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:17.649809  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:18.291288  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-880773 --alsologtostderr -v=3: (18.977763119s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.98s)
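
The burst of E1025 cert_rotation lines interleaved above does not belong to this test: they look like client-go in the shared test process still trying to reload client certificates for profiles (auto-035825 and friends) that earlier NetworkPlugins tests already deleted, so the files no longer exist. If that reading is right, the noise is harmless; a hypothetical cleanup of such leftover kubeconfig entries would look like:

    # Hypothetical: drop stale context/user entries for an already-deleted profile.
    kubectl config delete-context auto-035825
    kubectl config unset users.auto-035825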

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773: exit status 7 (81.864756ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-880773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:54:19.573092  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-880773 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.871459293s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-880773 -n default-k8s-diff-port-880773
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-846915 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7deecc20-1509-4c22-90d3-ebbe7e9e363f] Pending
helpers_test.go:352: "busybox" [7deecc20-1509-4c22-90d3-ebbe7e9e363f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1025 09:54:22.135230  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [7deecc20-1509-4c22-90d3-ebbe7e9e363f] Running
E1025 09:54:27.257692  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004442246s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-846915 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-45hnf" [4bfa16b2-fe16-47c9-8bd7-63c64dae30ac] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003979721s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-45hnf" [4bfa16b2-fe16-47c9-8bd7-63c64dae30ac] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004289118s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-656799 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
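
UserAppExistsAfterStop and AddonExistsAfterStop are the same idea applied after the restart: the dashboard pod deployed before the stop must come back healthy, and the metrics-scraper deployment must still be describable. The equivalent manual spot-check, using the commands the log already shows:

    # Confirm the dashboard survived the stop/start cycle.
    kubectl --context no-preload-656799 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context no-preload-656799 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper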

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.11s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-846915 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-846915 --alsologtostderr -v=3: (18.107618073s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-656799 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-q7z2s" [18009715-0497-4ac7-ae7f-2e2ec645bf27] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004003471s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-q7z2s" [18009715-0497-4ac7-ae7f-2e2ec645bf27] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003031052s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-676314 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-676314 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-846915 -n embed-certs-846915
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-846915 -n embed-certs-846915: exit status 7 (86.886849ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-846915 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (47.42s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-846915 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.097067335s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-846915 -n embed-certs-846915
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qzqj5" [d067ad03-ccfb-4849-9d46-03bf81ecb805] Running
E1025 09:55:15.016185  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/kindnet-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.022591  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/kindnet-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.033984  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/kindnet-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.055399  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/kindnet-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.097272  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/kindnet-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.178721  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/kindnet-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.340221  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/kindnet-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.463839  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/calico-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.470249  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/calico-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.481616  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/calico-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.503074  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/calico-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.544564  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/calico-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.626300  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/calico-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.661727  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/kindnet-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:15.788161  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/calico-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:16.109947  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/calico-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:16.303630  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/kindnet-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003892353s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qzqj5" [d067ad03-ccfb-4849-9d46-03bf81ecb805] Running
E1025 09:55:16.751469  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/calico-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:17.585763  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/kindnet-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:18.033594  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/calico-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:20.148051  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/kindnet-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:20.595989  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/calico-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003255728s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-880773 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-880773 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ml7nd" [4ae1da5f-f4fb-4da0-8e88-c4b69df12b73] Running
E1025 09:55:38.916226  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/custom-flannel-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:38.922679  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/custom-flannel-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:38.934041  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/custom-flannel-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:38.943484  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/auto-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:38.955901  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/custom-flannel-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:38.997375  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/custom-flannel-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:39.078944  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/custom-flannel-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:39.240441  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/custom-flannel-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:39.562194  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/custom-flannel-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:40.203989  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/custom-flannel-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:41.486268  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/custom-flannel-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003327182s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ml7nd" [4ae1da5f-f4fb-4da0-8e88-c4b69df12b73] Running
E1025 09:55:44.047787  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/custom-flannel-035825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:48.622417  134145 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/functional-063906/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004225477s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-846915 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-846915 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    

Test skip (26/326)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
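
Both env skips follow the same gate: minikube's docker-env and podman-env subcommands print shell exports that point a local docker or podman CLI at the daemon inside the minikube node, which only exists when the cluster was started with that runtime. A minimal sketch of the workflow these tests validate, assuming a profile started with --container-runtime=docker (the profile name here is illustrative, not from this run):

    # hypothetical docker-runtime profile
    eval $(out/minikube-linux-amd64 -p docker-demo docker-env)
    docker ps    # now lists containers inside the minikube node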

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
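
All three tunnel DNS skips trace to the constraint stated in the log: the DNS forwarding that resolves cluster-domain names on the host is only wired up by minikube tunnel under Hyperkit on Darwin. On this Linux/docker job the tunnel command itself still exists and provides routes to service IPs; a sketch of the invocation the tests build on (profile placeholder, not a command from this run):

    out/minikube-linux-amd64 -p <profile> tunnel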

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.79s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires a CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-035825 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-035825

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-035825

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-035825

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-035825

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-035825

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-035825

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-035825

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-035825

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-035825

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-035825

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: /etc/hosts:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: /etc/resolv.conf:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-035825

>>> host: crictl pods:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: crictl containers:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> k8s: describe netcat deployment:
error: context "kubenet-035825" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-035825" does not exist

>>> k8s: netcat logs:
error: context "kubenet-035825" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-035825" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-035825" does not exist

>>> k8s: coredns logs:
error: context "kubenet-035825" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-035825" does not exist

>>> k8s: api server logs:
error: context "kubenet-035825" does not exist

>>> host: /etc/cni:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: ip a s:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: ip r s:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: iptables-save:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: iptables table nat:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-035825" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-035825" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-035825" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: kubelet daemon config:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> k8s: kubelet logs:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:46:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-225615
contexts:
- context:
    cluster: cert-expiration-225615
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:46:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-225615
  name: cert-expiration-225615
current-context: ""
kind: Config
users:
- name: cert-expiration-225615
  user:
    client-certificate: /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/cert-expiration-225615/client.crt
    client-key: /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/cert-expiration-225615/client.key
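
The dump above is the root cause of every error in this debug log: current-context is empty and no kubenet-035825 context exists, only the leftover cert-expiration-225615 entry, so each kubectl call naming the kubenet context fails. A quick way to verify, assuming kubectl is on PATH (standard kubectl subcommands, not part of the test output):

    kubectl config get-contexts
    kubectl config use-context cert-expiration-225615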

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-035825

>>> host: docker daemon status:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: docker daemon config:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: docker system info:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: cri-docker daemon status:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: cri-docker daemon config:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: cri-dockerd version:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: containerd daemon status:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: containerd daemon config:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: containerd config dump:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: crio daemon status:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: crio daemon config:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: /etc/crio:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

>>> host: crio config:
* Profile "kubenet-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035825"

----------------------- debugLogs end: kubenet-035825 [took: 3.589750666s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-035825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-035825
--- SKIP: TestNetworkPlugins/group/kubenet (3.79s)
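
The skip reason is structural: kubenet is the legacy no-CNI network plugin, while crio delegates all pod networking to a CNI plugin, so the kubenet group can never run against this runtime. For comparison, a crio cluster with an explicit CNI would be started roughly like this (a sketch using documented minikube flags, not a command from this run):

    out/minikube-linux-amd64 start -p kubenet-035825 --container-runtime=crio --cni=bridge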

TestNetworkPlugins/group/cilium (6.06s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-035825 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-035825

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-035825

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-035825

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-035825

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-035825

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-035825

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-035825

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-035825

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-035825

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-035825

>>> host: /etc/nsswitch.conf:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: /etc/hosts:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: /etc/resolv.conf:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-035825

>>> host: crictl pods:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: crictl containers:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> k8s: describe netcat deployment:
error: context "cilium-035825" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-035825" does not exist

>>> k8s: netcat logs:
error: context "cilium-035825" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-035825" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-035825" does not exist

>>> k8s: coredns logs:
error: context "cilium-035825" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-035825" does not exist

>>> k8s: api server logs:
error: context "cilium-035825" does not exist

>>> host: /etc/cni:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: ip a s:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: ip r s:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: iptables-save:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: iptables table nat:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-035825

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-035825

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-035825" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-035825" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-035825

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-035825

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-035825" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-035825" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-035825" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-035825" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-035825" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: kubelet daemon config:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> k8s: kubelet logs:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:46:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-225615
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21794-130604/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:46:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: force-systemd-flag-170120
contexts:
- context:
    cluster: cert-expiration-225615
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:46:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-225615
  name: cert-expiration-225615
- context:
    cluster: force-systemd-flag-170120
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:46:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-flag-170120
  name: force-systemd-flag-170120
current-context: force-systemd-flag-170120
kind: Config
users:
- name: cert-expiration-225615
  user:
    client-certificate: /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/cert-expiration-225615/client.crt
    client-key: /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/cert-expiration-225615/client.key
- name: force-systemd-flag-170120
  user:
    client-certificate: /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/force-systemd-flag-170120/client.crt
    client-key: /home/jenkins/minikube-integration/21794-130604/.minikube/profiles/force-systemd-flag-170120/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-035825

>>> host: docker daemon status:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: docker daemon config:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: docker system info:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: cri-docker daemon status:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: cri-docker daemon config:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: cri-dockerd version:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: containerd daemon status:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: containerd daemon config:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: containerd config dump:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: crio daemon status:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: crio daemon config:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: /etc/crio:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

>>> host: crio config:
* Profile "cilium-035825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035825"

----------------------- debugLogs end: cilium-035825 [took: 5.852051582s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-035825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-035825
--- SKIP: TestNetworkPlugins/group/cilium (6.06s)
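
Cilium is skipped here by policy rather than incompatibility: the suite drops it as outdated and prone to interfering with parallel tests, but minikube still documents cilium as a --cni value. A manual run outside this suite would look roughly like (illustrative only, not a command from this run):

    out/minikube-linux-amd64 start -p cilium-035825 --cni=cilium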

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-001549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-001549
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
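
This group exercises the --disable-driver-mounts start flag, which turns off the hypervisor-provided host folder mounts; those mounts only exist on VM drivers such as VirtualBox, hence the skip on the docker driver. A rough sketch of the targeted configuration (assumes a VirtualBox host; not run in this job):

    out/minikube-linux-amd64 start -p disable-driver-mounts-001549 --driver=virtualbox --disable-driver-mounts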
